A Political Fable - Zero to Infinity


Mean squared error drops when we average our predictions, but only because it uses a convex loss function. If you faced a concave loss function, you wouldn't isolate yourself from others, which casts doubt on the relevance of Jensen's inequality for rational communication. The process of sharing thoughts and arguing differences is not like taking averages.
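
A small simulation of the convexity point, with invented numbers (not from the original post): for a convex loss like squared error, Jensen's inequality guarantees that the averaged prediction scores at least as well as the average individual; for a loss that is concave over the relevant region, the guarantee flips.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 10.0

# Hypothetical crowd that systematically overestimates, so every prediction
# sits on the same side of the truth (keeps the second loss concave there).
errors = np.abs(rng.normal(0.0, 3.0, size=100_000))
predictions = truth + errors

avg_prediction = predictions.mean()

# Convex loss (squared error): Jensen gives
#   loss(average prediction) <= average of individual losses.
sq_of_avg = (avg_prediction - truth) ** 2
avg_of_sq = ((predictions - truth) ** 2).mean()
print(f"squared error: {sq_of_avg:.2f} (averaged) vs {avg_of_sq:.2f} (individuals)")

# Concave loss on this region (square root of absolute error): the inequality
# flips, so averaging is no longer an automatic improvement.
sqrt_of_avg = abs(avg_prediction - truth) ** 0.5
avg_of_sqrt = (np.abs(predictions - truth) ** 0.5).mean()
print(f"sqrt error:    {sqrt_of_avg:.2f} (averaged) vs {avg_of_sqrt:.2f} (individuals)")
```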

Anything worse than the majority opinion should get selected out, so the majority opinion is rarely strictly superior to existing alternatives. Learning common biases won't help you obtain truth if you only use this knowledge to attack beliefs you don't like. Discussions about biases need to first do no harm by emphasizing motivated cognition, the sophistication effect, and dysrationalia, although even knowledge of these can backfire. Not being stupid seems like a more easily generalizable skill than breakthrough success. If debiasing is mostly about not being stupid, its benefits are hidden; hence, checking whether debiasing works is difficult, especially in the absence of organizations or systematized training.

Inductive bias is a systematic direction in belief revisions. The same observations could be evidence for or against a belief, depending on your prior. Inductive biases are more or less correct depending on how well they correspond with reality, so "bias" might not be the best description. The Friedman Unit is named after Thomas Friedman, who called "the next six months" the critical period in Iraq eight times over a span of several years. This is because future predictions are created and consumed in the now; they are used to create feelings of delicious goodness or delicious horror now, not to provide useful advice about the future.
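
To make the prior-dependence point above concrete, here is a toy urn example (hypothetical, not from the post): the same observation, "the first ball drawn is red," raises the probability that the next ball is red under one prior and lowers it under another.

```python
from fractions import Fraction

# Belief under test: "the next ball drawn will be red."
# Observation: the first ball drawn is red.

# Prior A: draws are i.i.d. with unknown bias, uniform (Beta(1,1)) prior over
# the bias.  After one red, Laplace's rule of succession pushes the belief up.
before_A = Fraction(1, 2)          # P(next red) before any data
after_A = Fraction(1 + 1, 2 + 1)   # (reds + 1) / (draws + 2) = 2/3
print("unknown-bias prior:      ", before_A, "->", after_A)

# Prior B: the urn is known to hold exactly 5 red and 5 blue balls, drawn
# without replacement.  After one red, fewer reds remain, so the belief drops.
before_B = Fraction(5, 10)
after_B = Fraction(4, 9)
print("fixed-composition prior: ", before_B, "->", after_B)
```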

After a point, labeling a problem as "important" is a commons problem. Rather than increasing the total resources devoted to important problems, resources are taken from other projects. Some grant proposals need to be written, but eventually this process becomes zero- or negative-sum on the margin. A prior is an assignment of a probability to every possible sequence of observations. In principle, the prior determines a probability for any event.

Formally, the prior is a giant look-up table, which no Bayesian reasoner would literally implement. Nonetheless, the formal definition is sometimes convenient. For example, uncertainty about priors can be captured with a weighted sum of priors. Some defend lottery-ticket buying as a rational purchase of fantasy. But you are occupying your valuable brain with a fantasy whose probability is nearly zero, wasting emotional energy.
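
As a rough illustration of the "weighted sum of priors" idea (not from the original post; the Beta parameters and the data are invented), here is a two-component mixture over a coin's bias, where the observed flips re-weight the components via their marginal likelihoods:

```python
from math import comb, lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def marginal_likelihood(a, b, heads, flips):
    """P(data | Beta(a, b) prior): the beta-binomial marginal likelihood."""
    return comb(flips, heads) * exp(log_beta(heads + a, flips - heads + b) - log_beta(a, b))

# Uncertainty about which prior is right, expressed as a 50/50 mixture.
priors = {"uniform Beta(1,1)": (1, 1), "near-fair Beta(20,20)": (20, 20)}
weights = {name: 0.5 for name in priors}

heads, flips = 8, 10   # hypothetical data

ml = {name: marginal_likelihood(a, b, heads, flips) for name, (a, b) in priors.items()}
total = sum(weights[name] * ml[name] for name in priors)
posterior_weights = {name: weights[name] * ml[name] / total for name in priors}

for name, w in posterior_weights.items():
    print(f"posterior weight on {name}: {w:.3f}")
# The mixture's predictive is just the weight-averaged predictive of its components.
```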

Without the lottery, people might fantasize about things that they can actually do, which might lead to thinking of ways to make the fantasy a reality. To work around a bias, you must first notice it, analyze it, and decide that it is bad. Lottery advocates are failing to complete the third step.

If the opportunity to fantasize about winning justified the lottery, then a "new improved" lottery would be even better. You would buy a nearly-zero chance to become a millionaire at any moment over the next five years. You could spend every moment imagining that you might become a millionaire at that moment. As a human, I have a proper interest in the future of human civilization, including the human pursuit of truth.

That makes your rationality my business. The danger is that we will think that we can respond to irrationality with violence. Relativism is not the way to avoid this danger. Instead, commit to using only arguments and evidence, never violence, against irrational thinking. This post was a place for debates about the nature of morality, so that subsequent posts touching tangentially on morality would not be overwhelmed. Examples of questions to be discussed here included: What is the difference between "is" and "ought" statements? Why do some preferences seem voluntary, while others do not?

Do children believe that God can change what is moral? Is there a direction to the development of moral beliefs in history, and, if so, what is the causal explanation of this? Does Tarski's definition of truth extend to moral statements?

If you were physically altered to prefer killing, would "killing is good" become true? If the truth value of a moral claim cannot be changed by any physical act, does this make the claim stronger or weaker? What are the referents of moral claims, or are they empty of content? Are there "pure" ought-statements, or do they all have is-statements mixed into them? Are there pure aesthetic judgments or preferences? Strong emotions can be rational. A rational belief that something good happened leads to rational happiness.

But your emotions ought not to change your beliefs about events that do not depend causally on your emotions. In our everyday lives, we are accustomed to rules with exceptions, but the basic laws of the universe apply everywhere without exception. Apparent violations exist only in our models, not in reality. You have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space.

This is your problem, not reality's, and you are the one who needs to change. If reality consistently surprises you, then your model needs revision. But beware those who act unsurprised by surprising data. Maybe their model was too vague to be contradicted.

Maybe they haven't emotionally grasped the implications of the data. Or maybe they are trying to appear poised in front of others. Respond to surprise by revising your model, not by suppressing your surprise. People justify Noble Lies by pointing out their benefits over doing nothing. But, if you really need these benefits, you can construct a Third Alternative for getting them. You have to search for one. Beware the temptation not to search or to search perfunctorily.

Ask yourself, "Did I spend five minutes by the clock trying hard to think of a better alternative?" One source of hope against death is Afterlife-ism. Some say that this justifies it as a Noble Lie. But there are better, because more plausible, Third Alternatives, including nanotech, actuarial escape velocity, cryonics, and the Singularity. If supplying hope were the real goal of the Noble Lie, advocates would prefer these alternatives.

But the real goal is to excuse a fixed belief from criticism, not to supply hope. The human brain can't represent large quantities: Saving one life and saving the whole world provide the same warm glow. But, however valuable a life is, the whole world is billions of times as valuable. The duty to save lives doesn't stop after the first saved life. Choosing to save one life when you could have saved two is as bad as murder.

There are no risk-free investments. Even US Treasury bills would fail under a number of plausible "black swan" scenarios. Nassim Taleb's own investment strategy doesn't seem to take sufficient account of such possibilities. Risk management is always a good idea. Correspondence bias, also known as the fundamental attribution error, refers to the tendency to attribute the behavior of others to intrinsic dispositions, while excusing one's own behavior as the result of circumstance. It is a tendency to attribute to a person a disposition to behave in a particular way, based on observing an episode in which that person behaves in that way.

The data set that gets considered consists only of the observed episode, while the target model is of the person's behavior in general, in many possible episodes, in many different possible contexts that may influence the person's behavior. People want to think that the Enemy is an innately evil mutant. But, usually, the Enemy is acting as you might in their circumstances. They think that they are the hero in their story and that their motives are just.

That doesn't mean that they are right. Killing them may be the best option available. But it is still a tragedy. This obsolete post was a place for free-form comments related to the project of the Overcoming Bias blog. School encourages two bad habits of thought: The first happens because students don't have enough time to digest what they learn. The second happens especially in fields like physics because students are so often just handed the right answer. Not every belief that we have is directly about sensory experience, but beliefs should pay rent in anticipations of experience.

For example, if I believe that "gravity is 9.8 m/s²," then I anticipate that a dropped object will accelerate downward at that rate. On the other hand, if your postmodern English professor says that the famous writer Wulky is a "post-utopian," this may not actually mean anything. The moral is to ask "What experiences do I anticipate?" Suppose someone claims to have a dragon in their garage, but as soon as you go to look, they say, "It's an invisible dragon!" And yet they may honestly believe they believe there's a dragon in the garage.

They may perhaps believe it is virtuous to believe there is a dragon in the garage, and believe themselves virtuous, even though they anticipate as if there is no dragon. You can have some fun with people whose anticipations get out of sync with what they believe they believe. This post recounts a conversation in which a theist had to backpedal when he realized that, by drawing an empirical inference from his religion, he had opened up his religion to empirical disproof. A woman on a panel enthusiastically declared her belief in a pagan creation myth, flaunting its most outrageously improbable elements.

This seemed weirder than "belief in belief" (she didn't act like she needed validation) or "religious profession" (she didn't try to act like she took her religion seriously). So, what was she doing? She was cheering for paganism — cheering loudly by making ridiculous claims. When you've stopped anticipating-as-if something is true, but still believe it is virtuous to believe it, this does not create the true fire of the child who really does believe. On the other hand, it is very easy for people to be passionate about group identification - sports teams, political sports teams - and this may account for the passion of beliefs worn as team-identification attire.

Religions used to claim authority in all domains, including biology, cosmology, and history. Only recently have religions attempted to be non-disprovable by confining themselves to ethical claims. But the ethical claims in scripture ought to be even more obviously wrong than the other claims, making the idea of non-overlapping magisteria a Big Lie. When your theory is proved wrong, just scream "OOPS!" Don't just admit local errors. Don't try to protect your pride by conceding the absolute minimal patch of ground.

Making small concessions means that you will make only small improvements. It is far better to make big improvements quickly. This is a lesson of Bayescraft that Traditional Rationality fails to teach. If you are paid for post-hoc analysis, you might like theories that "explain" all possible outcomes equally well, without focusing uncertainty.

But what if you don't know the outcome yet, and you need to have an explanation ready in minutes? Then you want to spend most of your time on excuses for the outcomes that you anticipate most, so you still need a theory that focuses your uncertainty. Doubt is often regarded as virtuous for the wrong reason: because it is a sign of humility. But from a rationalist perspective, this is not why you should doubt. The doubt, rather, should exist to annihilate itself: to investigate the belief it attaches to until either the belief or the doubt is destroyed. When you can no longer make progress in this respect, the doubt is no longer useful to you as a rationalist.

It was perfectly all right for Isaac Newton to explain just gravity, just the way things fall down - and how planets orbit the Sun, and how the Moon generates the tides - but not the role of money in human society or how the heart pumps blood. Sneering at narrowness is rather reminiscent of ancient Greeks who thought that going out and actually looking at things was manual labor, and manual labor was for slaves. This post quotes a poem by Eugene Gendlin, which begins, "What is true is already so. Owning up to it doesn't make it worse." If you think that the apocalypse will come in one year, while I think that it will come in a later year, how could we bet on this?

One way would be for me to pay you X dollars every year until the earlier predicted date; then, if the apocalypse doesn't happen, you pay me 2X dollars every year until the later one. This idea could be used to set up a prediction market, which could give society information about when an apocalypse might happen. Yudkowsky later realized that this wouldn't work. A hypothesis that forbids nothing permits everything, and thus fails to constrain anticipation.

Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge. If an experiment contradicts a theory, we are expected to throw out the theory, or else break the rules of Science. But this may not be the best inference. If the theory is solid, it's more likely that an experiment got something wrong than that all the confirmatory data for the theory was wrong.

In that case, you should be ready to "defy the data", rejecting the experiment without coming up with a more specific problem with it; the scientific community should tolerate such defiances without social penalty, and reward those who correctly recognized the error if it fails to replicate. In no case should you try to rationalize how the theory really predicted the data after all.

Absence of proof is not proof of absence. But absence of evidence is always evidence of absence. The absence of an observation may be strong evidence or very weak evidence of absence, but it is always evidence. If you are about to make an observation, then the expected value of your posterior probability must equal your current prior probability. On average, you must expect to be exactly as confident as when you started out.
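
A quick numerical check of both points, using invented probabilities: failing to observe the expected evidence moves the belief down, and the probability-weighted average of the possible posteriors recovers the prior exactly (conservation of expected evidence).

```python
# Hypothetical numbers for a single hypothesis H and one possible observation E.
p_h = 0.3                  # prior P(H)
p_e_given_h = 0.8          # P(E | H)
p_e_given_not_h = 0.2      # P(E | ~H)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

post_if_e = p_e_given_h * p_h / p_e                    # P(H | E): belief goes up
post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)    # P(H | ~E): belief goes down

expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e
print(round(post_if_e, 3), round(post_if_not_e, 3))    # 0.632 and 0.097
print(round(expected_posterior, 3), p_h)               # 0.3 equals the prior
```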

If you are a true Bayesian, you cannot seek evidence to confirm your theory, because you do not expect any evidence to do that. You can only seek evidence to test your theory. Many people think that you must abandon a belief if you admit any counterevidence. Instead, change your belief by small increments. Acknowledge small pieces of counterevidence by shifting your belief down a little.
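
One convenient way to picture incremental updating, sketched below with made-up likelihood ratios: work in log-odds, where each piece of evidence simply adds its log likelihood ratio, so counterevidence nudges the belief down a little rather than forcing an all-or-nothing reversal.

```python
from math import log, exp

def to_log_odds(p):
    return log(p / (1 - p))

def to_prob(log_odds):
    return 1 / (1 + exp(-log_odds))

belief = to_log_odds(0.70)   # start fairly confident

# Hypothetical stream of evidence: likelihood ratios > 1 support the belief,
# ratios < 1 count against it.
evidence_llrs = [log(2), log(2), log(0.5), log(3), log(0.25)]

for llr in evidence_llrs:
    belief += llr
    print(f"{to_prob(belief):.3f}")   # rises, rises, dips, rises, dips
```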

Supporting evidence will follow if your belief is true. It is tempting to weigh each counterargument by itself against all supporting arguments. No single counterargument can overwhelm all the supporting arguments, so you easily conclude that your theory was right. Indeed, as you win this kind of battle over and over again, you feel ever more confident in your theory. But, in fact, you are just rehearsing already-known evidence in favor of your view. Hindsight bias makes us overestimate how well our model could have predicted a known outcome.

We underestimate the cost of avoiding a known bad outcome, because we forget that many other equally severe outcomes seemed as probable at the time. Hindsight bias distorts the testing of our models by observation, making us think that our models are better than they really are. Hindsight bias leads us to systematically undervalue scientific findings, because we find it too easy to retrofit them into our models of the world.

This unfairly devalues the contributions of researchers. Worse, it prevents us from noticing when we are seeing evidence that doesn't fit what we really would have expected. We need to make a conscious effort to be shocked enough. For good social reasons, we require legal and scientific evidence to be more than just rational evidence. Hearsay is rational evidence, but as legal evidence it would invite abuse. Scientific evidence must be public and reproducible by everyone, because we want a pool of especially reliable beliefs. Thus, Science is about reproducible conditions, not the history of any one experiment.

The belief that nanotechnology is possible is based on qualitative reasoning from scientific knowledge. But such a belief is merely rational. It will not be scientific until someone constructs a nanofactory. Yet if you claim that nanomachines are impossible because they have never been seen before, you are being irrational.

To think that everything that is not science is pseudoscience is a severe mistake. People think that fake explanations use words like "magic," while real explanations use scientific words like "heat conduction."

Scientific-sounding words aren't enough. Real explanations constrain anticipation. Ideally, you could explain only the observations that actually happened. Fake explanations could just as well "explain" the opposite of what you observed. In schools, "education" often consists of having students memorize answers to specific questions (i.e., the teacher's passwords). Thus, students incorrectly learn to guess at passwords in the face of strange observations rather than admit their confusion.

If your explanation lacks a model that constrains anticipation, start from a recognition of your own confusion and surprise at seeing the result. You don't understand the phrase "because of evolution" unless it constrains your anticipations. Otherwise, you are using it as attire to identify yourself with the "scientific" tribe. Similarly, it isn't scientific to reject strongly superhuman AI only because it sounds like science fiction.

A scientific rejection would require a theoretical model that bounds possible intelligences. If your proud beliefs don't constrain anticipation, they are probably just passwords or attire. It is very easy for a human being to think that a theory predicts a phenomenon, when in fact it was fitted to the phenomenon. Properly designed reasoning systems (GAIs) would be able to avoid this mistake using knowledge of probability theory, but humans have to write down a prediction in advance in order to ensure that our reasoning about causality is correct.

There are certain words and phrases that act as "stopsigns" to thinking. They aren't actually explanations, nor do they help resolve the actual issue at hand; they act as a marker saying "don't ask any questions." The theory of vitalism was developed before the idea of biochemistry. It stated that the mysterious properties of living matter, compared to nonliving matter, were due to an "élan vital". This explanation acts as a curiosity-stopper, and leaves the phenomenon just as mysterious and inexplicable as it was before the answer was given. It feels like an explanation, though it fails to constrain anticipation.

The theory of "emergence" has become very popular, but is just a mysterious answer to a mysterious question. After learning that a property is emergent, you aren't able to make any new predictions. Positive bias is the tendency to look for evidence that confirms a hypothesis, rather than disconfirming evidence.

The concept of complexity isn't meaningless, but too often people assume that adding complexity to a system they don't understand will improve it. If you don't know how to solve a problem, adding complexity won't help; better to say "I have no idea" than to say "complexity" and think you've reached an answer. Traditional rationality without Bayes' Theorem allows you to formulate hypotheses without a reason to prefer them to the status quo, as long as they are falsifiable.

Even following all the rules of traditional rationality, you can waste a lot of time. It takes a lot of rationality to avoid making mistakes; a moderate level of rationality will just lead you to make new and different mistakes. There are no inherently mysterious phenomena, but every phenomenon seems mysterious, right up until the moment that science explains it.

It seems to us now that biology, chemistry, and astronomy are naturally the realm of science, but if we had lived through their discoveries, and watched them reduced from mysterious to mundane, we would be more reluctant to believe the next phenomenon is inherently mysterious.

It's easy not to take the lessons of history seriously; our brains aren't well-equipped to translate dry facts into experiences. But imagine living through the whole of human history - imagine watching mysteries be explained, watching civilizations rise and fall, being surprised over and over again - and you'll be less shocked by the strangeness of the next era. Imagine trying to explain quantum physics, the internet, or any other aspect of modern society to people from centuries past. Technology and culture change so quickly that our civilization would be unrecognizable to people of only a few generations ago; what will the world look like a few generations from now?

When you encounter something you don't understand, you have three options: to explain it, to worship it, or to ignore it. Although science does have explanations for phenomena, it is not enough to simply say "Science!" Yet for many people, simply noting that "Science has an answer" is enough to make them no longer curious about how it works.

In that respect, "Science" is no different from more blatant curiosity-stoppers like "God did it!" You should only be satisfied with a predictive model, and with how a given phenomenon fits into that model. Under some circumstances, rejecting arguments on the basis of absurdity is reasonable. The absurdity heuristic can allow you to identify hypotheses that aren't worth your time. However, detailed knowledge of the underlying laws should allow you to override the absurdity heuristic.

Objects fall, but helium balloons rise. The future has been consistently absurd and will likely go on being that way. When the absurdity heuristic is extended to rule out crazy-sounding things with a basis in fact, it becomes absurdity bias. Availability bias is a tendency to estimate the probability of an event based on whatever evidence about that event pops into your mind, without taking into account the ways in which some pieces of evidence are more memorable than others, or some pieces of evidence are easier to come by than others.

This bias consists in considering a mismatched data set, which leads to a distorted model and a biased estimate. New technologies and social changes have consistently happened at a rate that would seem absurd and impossible to people only a few decades before they happen. Hindsight bias causes us to see the past as obvious and as a series of changes towards the "normalcy" of the present; availability biases make it hard for us to imagine changes greater than those we've already encountered, or the effects of multiple changes.

The future will be stranger than we think. Exposure to numbers affects guesses on estimation problems by anchoring your mind to a given estimate, even if it's wildly off base. Be aware of the effect random numbers have on your estimation ability. If you make a mistake, don't excuse it or pat yourself on the back for thinking originally; acknowledge you made a mistake and move on.

If you become invested in your own mistakes, you'll stay stuck on bad ideas. The Radical Honesty movement requires participants to speak the truth, always, whatever they think. The more competent you grow at avoiding self-deceit, the more of a challenge this would be - but it's an interesting thing to imagine, and perhaps strive for. Advocates for the Singularity sometimes call for outreach to artists or poets; we should move away from thinking of people as if their profession is the only thing they can contribute to humanity.

Being human is what gives us a stake in the future, not being poets or mathematicians. Words like "democracy" or "freedom" are applause lights - no one disapproves of them, so they can be used to signal conformity and hand-wave away difficult problems. If you hear people talking about the importance of "balancing risks and opportunities" or of solving problems "through a collaborative process" that aren't followed up by any specifics, then the words are applause lights, not real thoughts.

George Orwell's writings on language and totalitarianism are critical to understanding rationality. Orwell was an opponent of the use of words to obscure meaning, or to convey ideas without their emotional impact. Language should get the point across - when the effort to convey information gets lost in the effort to sound authoritative, you are acting irrationally.

It's easy to think that rationality and seeking truth are a purely intellectual exercise, but this ignores the lessons of history. Cognitive biases and muddled thinking allow people to hide from their own mistakes and allow evil to take root. Spreading the truth makes a real difference in defeating evil. George Orwell wrote about what he called "doublethink", where a person is able to hold two contradictory thoughts in their mind simultaneously. While some argue that self-deception can make you happier, doublethink will actually lead only to problems. We tend to plan envisioning that everything will go as expected.

Even assuming that such an estimate is accurate conditional on everything going as expected, things will not go as expected. As a result, we routinely see outcomes worse than the ex ante worst-case scenario. The planning fallacy is a tendency to overestimate your efficiency in achieving a task. The data set you consider consists of simple cached ways in which you move about accomplishing the task, and lacks the unanticipated problems and more complex ways in which the process may unfold.

As a result, the model fails to adequately describe the phenomenon, and the answer comes out systematically wrong. Nobel Laureate Daniel Kahneman recounts an incident where the inside view and the outside view of the time it would take to complete a project of his were wildly different. The conjunction rule of probability theory says that the probability of two statements both being true can never exceed the probability of either one alone. However, in the psychology lab, subjects' judgments do not conform to this rule. This is not an isolated artifact of a particular study design. Debiasing won't be as simple as practicing specific questions; it requires certain general habits of thought. When it seems like an experiment that's been cited does not provide enough support for the interpretation given, remember that scientists are generally pretty smart.

This is especially true if the experiment was done a long time ago, or if it is described as "classic" or "famous". In that case, you should consider the possibility that there is more evidence that you haven't seen. If you want to avoid the conjunction fallacy, you must try to feel a stronger emotional impact from Occam's Razor. Each additional detail added to a claim must feel as though it is driving the probability of the claim down towards zero.
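
A toy demonstration of the conjunction rule (all probabilities invented): by the chain rule, each added detail multiplies in a factor no greater than one, so the probability of the full conjunction can only fall.

```python
# Conditional probability of each extra detail, given the details before it.
details = {
    "the project slips":                0.6,
    "... because of a vendor delay":    0.5,
    "... announced before the deadline": 0.4,
    "... by exactly six weeks":         0.2,
}

p = 1.0
for detail, p_given_rest in details.items():
    p *= p_given_rest   # chain rule: P(A, B, ...) = P(A) * P(B | A) * ...
    print(f"{p:.3f}  {detail}")
# Output falls monotonically: 0.600, 0.300, 0.120, 0.024.
```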

Evidence is an event connected by a chain of causes and effects to whatever it is you want to learn about. It also has to be an event that is more likely if reality is one way, than if reality is another. If a belief is not formed this way, it cannot be trusted. Part of what makes humans different from other animals is our own ability to reason about our reasoning. Mice do not think about the cognitive algorithms that generate their belief that the cat is hunting them. Our ability to think about what sort of thought processes would lead to correct beliefs is what gave rise to Science.

This ability makes our admittedly flawed minds much more powerful. If you are considering one hypothesis out of many, or that hypothesis is more implausible than others, or you wish to know with greater confidence, you will need more evidence. Ignoring this rule will cause you to jump to a belief without enough evidence, and thus be wrong. Albert Einstein, when asked what he would do if an experiment disproved his theory of general relativity, responded with "I would feel sorry for [the experimenter]. The theory is correct." In order to even consider the hypothesis of general relativity in the first place, he would have needed a large amount of Bayesian evidence.

To a human, Thor feels like a simpler explanation for lightning than Maxwell's equations, but that is because we don't see the full complexity of an intelligent mind. However, if you try to write a computer program to simulate Thor and a computer program to simulate Maxwell's equations, one will be much easier to accomplish. This is how the complexity of a hypothesis is measured in the formalisms of Occam's Razor. Wherever you are, whatever you're doing, take a minute to not destroy the world.
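
Kolmogorov complexity - the length of the shortest program that reproduces a hypothesis - is uncomputable, but compressed length gives a crude, hedged stand-in for the Thor-versus-Maxwell intuition above: regular structure compresses well, an agent's arbitrary whims do not. The strings below are invented, and zlib is only a rough proxy, not the formal measure.

```python
import random
import zlib

random.seed(0)  # deterministic toy data

# A short, highly regular "equations" description, repeated to fill space,
# versus an equally long trace of arbitrary decisions with no compact pattern.
maxwell_like = b"curl E = -dB/dt; curl B = mu0*(J + eps0*dE/dt); div E = rho/eps0; div B = 0; " * 10
thor_like = bytes(random.randrange(256) for _ in range(len(maxwell_like)))

print("compressed size of the 'equations' description:", len(zlib.compress(maxwell_like)))
print("compressed size of the 'whim of an agent' trace:", len(zlib.compress(thor_like)))
```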

Someone tells you only the evidence that they want you to hear. Forced to update your beliefs until you reach their position? No, you also have to take into account what they could have told you but didn't. Rationality works forward from evidence to conclusions. Rationalization tries in vain to work backward from favourable conclusions to the evidence. But you cannot rationalize what is not already rational.

It is as if "lying" were called "truthization". You can't produce a rational argument for something that isn't rational. First select the rational choice. Then the rational argument is just a list of the same evidence that convinced you. We all change our minds occasionally, but we don't constantly, honestly reevaluate every decision and course of action.

Once you think you believe something, the chances are good that you already do, for better or worse. When people doubt, they instinctively ask only the questions that have easy answers. When you're doubting one of your most cherished beliefs, close your eyes, empty your mind, grit your teeth, and deliberately think about whatever hurts the most.

If you can find within yourself the slightest shred of true uncertainty, then guard it like a forester nursing a campfire. If you can make it blaze up into a flame of curiosity, it will make you light and eager, and give purpose to your questioning and direction to your skills. The path to rationality begins when you see a great flaw in your existing art, and discover a drive to improve, to create new skills beyond the helpful but inadequate ones you found in books.

Eliezer's first step was to catch what it felt like to shove an unwanted fact to the corner of his mind. Singlethink is the skill of not doublethinking. Traditional Rationality is phrased in terms of social rules, with violations interpretable as cheating - as defections from cooperative norms. But viewing rationality as a social obligation gives rise to some strange ideas. The laws of rationality are mathematics, and no social maneuvering can exempt you.

The facts that philosophers call "a priori" arrived in your brain by a physical process. Thoughts are existent in the universe; they are identical to the operation of brains. The "a priori" belief generator in your brain works for a reason. Even slight exposure to a stimulus is enough to change the outcome of a decision or estimate. Contamination by priming is the problem of a stimulus implicitly introducing facts into the data set your mind attends to.

When I saw the girl I saw the form, and when I saw the form I saw the girl. I could barely even see. My mind was revolting against what it was attempting to process.

I had been scared before in my life and I had never been more scared than when I was trapped in the fourth room, but that was before room six. I just stood there, staring at whatever it was that spoke to me. There was no exit. I was trapped here with it. And then it spoke again.

I was hyperventilating when I threw my phone to the floor of my car. CLICK, call over. Is it possible to be arrested from another country? Is that what extradition means? Did I spawn an international incident? With buttery fingers, I checked my phone. The theater was the only place I could think of to go after my world collapsed with the conference call I left 20 minutes before. So, I propped my feet on the headrest of the empty seat in front of me and retraced every step of the project.

I was flying home after a week away, working my day job as a traveling solution engineer for an enterprise content management software company. The perfect post-grad job for someone with my degree. Packed in the transport, sardine style, we were all anxious to make our plane to Cleveland and pissed off at the delay. Dealing with a particularly difficult customer in D.C. that week left me drained and defeated.
Honestly, I was just happy to be leaving the city and, with a tired breath, I turned to my phone in search of an ounce of control in my life. But extra money is always nice. As sardines in the tin can of the moving transport, we jiggled with every bump on the tarmac. I wanted to purchase an entire coin, but was willing to take what I could get. Hugging the standing pole of the transport, I stood staring at the Buy screen with an uncommitted transaction that longed to be confirmed or denied.

My thumb hovered over complete purchase. I was waiting for courage. Turns out, all I needed was a deep crack in the tarmac to inspire a jolt from the bus. Phone in hand, my body lurched forward to collide with the passenger standing in front of me. After the cabin settled and I finished an awkward apology with the gentleman I bumped, I looked to my phone.

After amassing over 33,000 Telegram chat members in seven days, my team and I decided to end the airdrop early in an attempt to carry our momentum into the yet-to-be-planned ICO. Masking my desperation, I crafted the perfect response.

To my foggy and frazzled mind, the letters on the Buy screen rearranged themselves. My once-empty Coinbase wallet was now blossoming with over half a Bitcoin, half of something I knew nothing about. A surge of body-tingling anxiety balanced by a tickle of excitement overwhelmed me as I came to that realization. I had officially sipped the Kool-Aid… and it was delicious. After landing home, I immediately began consuming crypto content. Money is a hell of a motivator and I had to know what was to come of mine.

That, and it was also nice to have a new side project. By December, the success of Bitcoin caught on with the mainstream and pushed my co-workers to invest with me. Soon, a small troop of us would meet in a pod of desks at work between meetings and client calls to discuss new coins, progressive tech and the future of money. We rode out the never-ending bull runs for 8 to 10 weeks, going to bed in the green and waking up even greener. My hands grew sore from endless high-fives and my cheeks began to cramp from the bottomless excitement as we eagerly threw more and more money into the flourishing ecosystem, reaping immediate gains.

The frenzy could never end… until it did. My curiosity towered over my fear. By the end of January, the crypto market was continuing its collapse. Through the confidence only achieved by dumb luck, we agreed we wanted to do more than just invest. It was clear we knew very little of the blockchain framework actually supporting cryptocurrencies, but were naive enough to think we could launch a crypto project anyway. Without the crutch of a life-changing application for blockchain or a progressive codebase, our only hope was to create a coin that could yell really, really loudly.

It was two weeks after Red Tuesday, and ever since, my crypto conversations with the bandits had fallen away. We all went back to concentrating on our day jobs and our real futures as solution engineers. A coin against Trump! Admittedly, I was politically naive (and still am), but I understood that Donald Trump was the epicenter of passionate debate around the world. From the most left-sided liberals to right-wing conservatives, Trump inspired bottomless chatter and everyone had their opinions.

If the project was to have any chance of seeing the light of day, one of us would have to learn. The irritation in my neck intensified as my thoughts continued festering about the viral possibilities of our uncreated crypto. As January inched along, I remember looking forward more and more to 5 o'clock. Fueled by anti-Trump headlines and an even more desperate appetite to learn, I embarked on my new crypto tangent.

It was clear that if this project was going to evolve beyond conversations of dollar signs among laughs, I would need to be the one to figure out how. My research shifted from market conditions to development frameworks and cryptocurrency parameters. In college, I did well in my two object-oriented programming classes, but I am not a developer and blockchain is not object-oriented programming. The videos I found were carpet-bombed with jargon I was unfamiliar with, forcing me to pause every 30 seconds, Google a term, and annotate an article that detailed what the term was and why it was important to cryptos, before navigating back to YouTube to resume the video.

This process turned every video into a feature film as I started to understand. What could I lose? The feeling of having my very own space on the public blockchain, searchable and tradeable by anyone in the world, was the first of many rewarding experiences throughout the project. Arrogance is a damned thing. I folded like a deck of cards at his desk as I submitted my two weeks' notice. It was a disaster, complete with shaky voice, sweaty palms and the heartbeat of Usain Bolt finishing the 100-meter dash.

My boss was shocked.