Looking for a full, FREE Thinking, Fast and Slow summary?
You're in the right place!
Here's what you'll find on this page...
Thinking, Fast and Slow Review
Thinking, Fast and Slow cuts to the heart of decision-making. It examines the fundamental cognitive processes we use day-to-day. It’s no simple self-help ‘recipe’. But if you’re interested in challenging the basis of your thinking, it makes for a fascinating read.
As you might expect from a Nobel Prize-winning psychologist, the book is information-rich. It’s packed full of economic science, behavioral studies, and personal anecdotes.
Despite this, Kahneman’s conversational style makes it easy to keep turning pages. But this is one book you don’t want to race through! I recommend taking it slow (yes – I’m already putting the book’s insights into practice!) so you have time to digest its ideas, and consider how they apply to your psychological nature.
Why? Because changing mental processes is hard. You may even feel defeated as Kahneman reveals just how deep our biases run, and how flawed our sense of voluntary control really is.
But you’ll also quickly learn to identify the type of thinking you’re applying to situations. Which will help you question your assumptions and start making better decisions.
There’s no way to summarize a good book without losing important information, so I strongly recommend reading the original.
For now, though, here’s TAoL’s book summary of Thinking, Fast and Slow…
Thinking, Fast and Slow Summary
The Two Systems: System 1 & System 2
Kahneman tells us we use two systems for thinking.
System 1 is a fast thinker. We use it to rapidly recall facts we know well, like the capital of France. We also use it to intuitively process information we need quickly, like discerning emotions from facial expressions. System 1 requires little effort and can make quick decisions.
System 2 is a slow thinker. This is the conscious decision-maker. It uses logic to tackle complex computations that are too difficult or unfamiliar for System 1, like math problems. We also use it to intentionally control our behavior, like staying polite when we’re angry. System 2 requires attention and effort.
System 1 lets us respond quickly and instinctively to a wide range of fast and ever-changing inputs. System 2 is more thorough and logical but also slower and more resource intensive.
Splitting thinking between System 1 and System 2 usually works well. System 1 takes care of low-level tasks. System 2 only takes over when System 1 fails because of difficulty, time pressure, or novelty (new information).
This is an efficient and effective way for our minds to operate because we have limited mental energy. To preserve it, we use System 1 by default and only switch to System 2 when required.
(Note: There’s a proposed link between IQ and our ability to switch to System 2. One famous study – whose findings have recently been challenged – gave four-year-olds a choice between eating a snack immediately or waiting fifteen minutes for a larger reward. Those who waited demonstrated self-control (an activation of System 2) and, years later, scored higher on IQ tests.)
Problems Switching to System 2
Mistakes happen when the switch to System 2 fails to happen when it should.
We fall back on System 1’s learned behaviors and shortcuts (‘heuristics’) when we shouldn’t, which leads to cognitive biases that sabotage our thinking and decision-making.
Here’s a full list of biases and ideas covered in this Thinking, Fast and Slow summary:
- Cognitive Ease
- Creating New Norms
- Confirmation Bias
- The Halo Effect
- Intuitive Judgements
- The Law of Small Numbers
- Anchoring Effect
- Availability Bias
- The Conjunction Fallacy
- Causes Trump Statistics
- Regression to the Mean
- Taming Intuitive Predictions
- Hindsight Bias
- Expert Intuition
- Insider Blindness
- Excessive Optimism
- Utility Theory
- Prospect Theory
- Loss Aversion
- Endowment Effect
- The Fourfold Pattern
Click a concept or cognitive bias to jump to its explanation.
Or read on as we dive in below…
Priming
Priming happens when exposure to one idea makes you more likely to favor related ones.
For example, when asked to complete the word fragment SO_P, people primed with ‘eat’ will usually complete SOUP. Meanwhile, people primed with ‘wash’ will usually complete SOAP.
System 1 helps us make quick connections between causes and effects, things and their properties, and categories. By priming us (i.e., “pre-fetching” information based on recent stimuli and pre-existing associations) System 1 helps us make rapid sense of the infinite network of ideas that fills our sensory world, without having to analyze everything.
Priming usually works because the most familiar answer is also the most likely. But priming causes issues when System 2 leans on these associations inappropriately.
Instead of considering all options equally, System 1 gives ‘primed’ associations priority instead of relying on facts.
Cognitive Ease
Cognitive ease describes how easy a mental task feels.
It increases when information is clear, simple, and repeated.
Biases happen when complex information (requiring System 2) is presented in ways that make problems look easier than they really are (falling back on System 1).
For example, researchers gave students the following puzzle: “A bat and ball cost $1.10. The bat costs one dollar more than the ball. How much does the ball cost?”
The question is not straightforward and needs System 2 to engage. But researchers found that System 1 often responds with an intuitive, incorrect answer of 10 cents (correct = 5 cents).
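You can verify the correct answer with quick arithmetic. A minimal sketch, working in cents to avoid floating-point rounding:

```python
# Bat-and-ball puzzle: bat + ball = $1.10, and the bat costs $1.00 more.
# Work in cents so the totals are exact integers.
def total_cents(ball):
    bat = ball + 100  # the bat costs 100 cents more than the ball
    return bat + ball

print(total_cents(10))  # 120 -- the intuitive "10 cents" gives $1.20, too much
print(total_cents(5))   # 110 -- the correct "5 cents" gives exactly $1.10
```

The System 1 answer of 10 cents feels right only because it ignores that the bat’s price moves with the ball’s.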
Interestingly, when the students saw the question in a less legible font, cognitive ease decreased. Far fewer made the error because System 2 was more likely to engage.
Because we try to conserve mental energy, if cognitive ease is high, and our brain thinks it can solve a problem with System 1, it will not switch to System 2.
Note: Advertising has long used this technique to persuade people to make instinctive purchase decisions. A good mood can also fool us into a state of cognitive ease.
Creating New Norms
System 1 uses norms to maintain our model of the world, and constantly updates them. They tell us what to expect in a given context.
Surprises should trigger the activation of System 2 because they are outside of these models. But if new information is presented in familiar contexts or patterns, System 1 can fail to detect this and fall back on old ‘norms’.
For example, when asked “How many animals of each kind did Moses take into the ark?”, many people immediately answer “two”. They do not recognize what is wrong because the words fit the norm of a familiar biblical context (but Moses ≠ Noah).
Failing to create new norms appropriately causes us to construct stories that explain situations and ‘fill in the gaps’ based on our existing beliefs. This leads to false explanations for coincidences, or events with no correlation (also known as ‘narrative fallacy’).
Confirmation Bias
Confirmation bias is our tendency to favor facts that support our existing conclusions and unconsciously seek evidence that aligns with them.
Our conclusions can come from existing beliefs or values, or from System 1 norms. They can also come from an idea provided to us as a System 1 prime.
For example, if we believe that “Republicans are out to destroy our way of life”, we’ll subconsciously pay more attention and give more credit to evidence that confirms that belief while dismissing evidence that contradicts it. (And vice versa for Democrats.)
Confirmation bias affirms System 1 norms and priming. System 1 tends to prioritize evidence that supports pre-existing ideas and discount evidence that contradicts them.
The Halo Effect
The halo effect is confirmation bias towards people. It’s why first impressions count.
In one study, experimenters gave people lists of traits for fictitious characters and found that participants’ overall views changed significantly just based on the order of the traits (positive or negative first).
The halo effect also stretches to unseen attributes that are unknowable and unrelated to the initial interaction.
For example, friendly people are also considered more likely to donate to charity and tall people are perceived as more competent (which may help explain the unexpectedly high number of tall CEOs in the world’s largest companies).
System 1 struggles to account for the possibility that contradictory information may exist that we cannot see. It operates on the principle that “what you see is what you get”.
Intuitive Judgements
System 1 helps us make quick, sometimes life-saving judgements.
From an evolutionary sense, this helped us scan for and avoid threats. But System 1 also has blind spots that can lead to poor analysis.
For example, System 1 is good at assessing averages but very poor at determining cumulative effects. When shown lines of different lengths, it is very easy for people to estimate the average length. It is very difficult for them to estimate the total length.
System 1 also can’t “forget” matches that have nothing to do with the question and interfere with the answer. In one study participants were given a story about a particularly gifted child. Most readily answered the question “How tall is a man who is as tall as the child was clever?” even though there is no logical way of estimating the answer.
Intuitive assessments can be so strong they override our thought process even when we know System 2 should be making the decision. For example, when voters base their decision on candidates’ photos.
Substitution
Substitution is what happens when we conserve mental energy by replacing difficult questions with easier, related questions. This makes System 1 respond with a ‘mental shotgun’, and we fail to recognize that System 2 should be analyzing the problem.
For example, “How popular will the president be six months from now?” is substituted with “How popular is the president right now?” System 1 recalls the answer to the easier question with more cognitive ease.
Substitution happens more often when we’re emotional. For example, “How much would you contribute to save an endangered species?” is replaced with “How much emotion do you feel when you think of dying dolphins?”. System 1 matches giving money with the strength of emotion, even though the two are not directly related.
System 2 Relies on System 1 Thinking
System 2 builds on System 1 processes, which can have cognitive biases.
The Law of Small Numbers
The law of small numbers is when System 1 tries to explain results that are an effect of statistically small samples. This leads to System 1 inventing stories, connections and causal links that don’t exist.
For example, a study found that rural areas have both the lowest and highest rates of kidney cancer. This provoked theories about why small populations cause or prevent cancer, even though there is no link.
Small sample sets are just more likely to show extreme outcomes…
System 1 also tries to explain random results that appear to have a pattern. For example, in a sequence of coin toss results, TTTT is just as likely as TTTH. But the pattern in the first set triggers a search for a reason more than the second.
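A short simulation makes the law of small numbers concrete (the sample sizes and the 30–70% band are illustrative choices, not from the book): small samples land far from the true 50/50 rate much more often than large ones.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def extreme_rate(sample_size, trials=10_000):
    """Fraction of samples whose heads-rate falls outside 30-70%."""
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        if not 0.3 <= heads / sample_size <= 0.7:
            extreme += 1
    return extreme / trials

small = extreme_rate(10)    # small samples: extreme results are common (~11%)
large = extreme_rate(100)   # large samples: extreme results are vanishingly rare
print(small, large)
```

The coin is identical in both cases; only the sample size changes. Any story invented to explain the extreme small-sample results would be explaining pure chance.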
Anchoring Effect
Anchoring happens when an initially suggested answer skews our analysis, and we struggle to adjust away from it.
When given a starting point for a solution, System 2 will “anchor” its answer to this figure, then analyze whether the true value should be lower or higher.
For example, a sign in a shop that says “Limit of 12 per person” will cause people to take more items than a sign that says “No limit per person.”
The customers anchor their purchase decision to the number 12 because System 1 primes them with it. We don’t deduce the solution from a neutral starting point; System 2 starts from a bias. The technique is exploited in real estate sales and many other forms of negotiation.
Availability Bias
Availability bias happens when our thinking is influenced by our ability to recall examples.
System 1 can recall memories better than other evidence because of the cognitive ease heuristic. This makes them feel truer than facts and figures and System 2 will give them more weight.
For example, we overestimate our contribution to group activities. The memory of ourselves completing a task is easier to recall than the memory of someone else doing it.
Availability also influences our estimation of risk. People’s experiences, or exposure through the media, cause most people to overestimate the likelihood and severity of otherwise rare risks and accidents.
Representativeness
Representativeness bias forms when the stories and norms established by System 1 become inseparable from an idea. System 2 then weighs these System 1 representations more heavily than the evidence.
In one study, people were asked to give the probability that an example student would choose to study a particular degree. They paid more attention to the description of their character than statistics on student numbers. The “people who like sci-fi also like computer science” stereotype had more impact than “only 3% of graduates study computer science”.
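A quick Bayesian sketch shows why the base rate should dominate. The likelihood numbers below are assumed purely for illustration; only the 3% base rate comes from the example above:

```python
# Even if a "sci-fi fan" description is 4x as likely for CS students,
# the 3% base rate keeps the posterior probability low.
base_rate = 0.03            # P(studies computer science) -- from the example
p_desc_given_cs = 0.40      # assumed: P(fits description | CS student)
p_desc_given_other = 0.10   # assumed: P(fits description | any other student)

numerator = p_desc_given_cs * base_rate
evidence = numerator + p_desc_given_other * (1 - base_rate)
posterior = numerator / evidence
print(round(posterior, 2))  # ~0.11 -- still unlikely, despite the stereotype
```

The stereotype quadruples the odds, yet the student is still far more likely *not* to study computer science. System 1 skips this calculation entirely.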
The Conjunction Fallacy
The conjunction fallacy, or the Linda problem, happens because System 1 prefers more complete stories (over simpler alternatives).
A scenario with more variables is less likely to be true. But because it provides a more complete and persuasive argument for System 1, System 2 can falsely estimate that it is more likely.
For example, a woman (‘Linda’) was described to participants in a study as “single, outspoken and very bright”. They were more likely to categorize her as a “bank teller who is an active feminist” than “a bank teller”.
It is less probable that someone would belong in the first category (since it requires two facts to be true), but it feels truer. It fits with the norm that System 1 has created around Linda.
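The underlying rule is simple: the probability of a conjunction can never exceed the probability of either of its parts. A minimal sketch with assumed numbers:

```python
# Whatever probabilities we plug in, P(teller AND feminist) <= P(teller).
p_teller = 0.05                # assumed: P(Linda is a bank teller)
p_feminist_given_teller = 0.8  # assumed: even a very high conditional...
p_both = p_teller * p_feminist_given_teller

print(p_both <= p_teller)  # True -- the conjunction is always less probable
```

No choice of inputs can make the richer story more probable than the plain one; it can only *feel* more probable.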
Note: This is also why we struggle so much with Occam’s Razor.
Causes Trump Statistics
The Causes Trump Statistics bias is caused by System 1 preferring stories over numbers.
For example, people were told that only 27% of subjects in an experiment went to help someone who sounded like they were choking in the next booth. Despite this low statistic, people predicted that subjects that appeared to be nice would be much more likely to help.
System 1 places more importance on causal links than on statistics.
A surprising anecdote of an individual case has more influence on our judgment than a surprising set of statistics. Some people were not told the results of the choking experiment but were told that the nice interviewees did not help. Their predictions of the overall results were then fairly accurate.
An extension to this bias is that we put more faith in statistics that are presented in a way that easily links cause and effect.
People in a study were told 85% of cars involved in accidents are green. This statistic was cited more in decision-making about car crashes than if participants were told that 85% of cabs in a city are green.
Statistically, the evidence is the same (if 85% of cars are green and cars of all colours crash equally then 85% of crashes will involve green cars) but one is more readily believed because it is structured to appeal to our love of storytelling.
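The parenthetical claim is easy to verify with illustrative numbers (the fleet size and crash rate below are assumed):

```python
# If 85% of cars are green and colour has no effect on crash rates,
# then 85% of crashes involve green cars -- the two framings are identical.
cars = 100_000
green_cars = int(cars * 0.85)
crash_rate = 0.02  # assumed: the same for every colour

green_crashes = green_cars * crash_rate
total_crashes = cars * crash_rate
print(round(green_crashes / total_crashes, 2))  # 0.85
```

The evidence is mathematically identical either way; only the causal-sounding framing changes how much weight we give it.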
Regression to the Mean
Failure to expect a regression to the mean is a bias that causes us to try to find plausible explanations for reversions from high or low performance.
Probability tells us that abnormally high (or low) results are most likely to be followed by a result closer to the overall mean than by another extreme result. But System 1’s tendency to look for causal effects makes people disregard averages and look for stories.
For example, the likelihood of a person achieving success in a particular sporting event is a combination of talent and chance. Average performance is the mean performance over time.
This means an above (or below) average result on one day is likely to be followed by a more average performance the next. This has nothing to do with the above-average result. It’s just that if the probability of good performance follows a normal distribution then a result that tracks the mean will always be more likely than another extreme result.
But people will try to find an explanation, such as high performance on the first day creates pressure on the second day. Or that high performance on the first day is the start of a “streak” and high performance on the second day is expected.
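Regression to the mean falls straight out of a “performance = talent + luck” model. A minimal simulation sketch (the population size and noise levels are assumed):

```python
import random

random.seed(0)  # fixed seed for reproducibility

N = 10_000
talent = [random.gauss(0, 1) for _ in range(N)]
day1 = [t + random.gauss(0, 1) for t in talent]  # performance = talent + luck
day2 = [t + random.gauss(0, 1) for t in talent]  # same talent, fresh luck

# Take the day-1 top 10% and compare their average scores across days.
top = sorted(range(N), key=lambda i: day1[i], reverse=True)[: N // 10]
avg1 = sum(day1[i] for i in top) / len(top)
avg2 = sum(day2[i] for i in top) / len(top)
print(round(avg1, 2), round(avg2, 2))  # day-2 average is noticeably lower
```

Nothing about “pressure” or “streaks” is in the model: the top performers’ day-1 luck simply doesn’t repeat, so their day-2 scores drift back toward the mean.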
Taming Intuitive Predictions
Despite the pitfalls of relying on the intuitions of System 1, System 2 cannot function without it. We are often faced with problems where information is incomplete, and we need to estimate to come up with an answer.
Kahneman suggests the following process for overcoming the inaccuracies of System 1:
- Start from a ‘base’ result by looking at average or statistical information.
- Separately, estimate what you would expect the result to be based on your beliefs and intuitions about the specific scenario.
- Estimate what the correlation is between your intuitions and the outcome you are predicting.
- Apply this correlation to the difference between the ‘base’ and your intuitive estimate.
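The four steps above reduce to a single formula: move from the base rate toward your intuition, but only as far as the correlation justifies. The GPA numbers below are assumed for illustration:

```python
# Kahneman's correction: prediction = base + correlation * (intuition - base)
def tamed_prediction(base, intuition, correlation):
    return base + correlation * (intuition - base)

# Assumed example: predicting a student's GPA after an impressive interview.
base = 3.0         # the average GPA (statistical baseline)
intuition = 3.8    # what the interview makes you expect
correlation = 0.3  # assumed: how well interviews actually predict GPA

print(round(tamed_prediction(base, intuition, correlation), 2))  # 3.24
```

With zero correlation the formula returns the base rate; with perfect correlation it returns your intuition unchanged. Most real predictors sit in between, so the tamed prediction should too.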
It is vital that we activate System 2 when trying to predict extreme examples, and avoid relying on the intuition that System 1 provides us with.
Overconfidence in System 1
Sometimes, we place too much confidence in System 1, which leads to biases.
Hindsight Bias
Hindsight Bias is caused by overconfidence in our ability to explain the past.
When a surprising event happens, System 1 quickly adjusts our views of the world to accommodate it (see creating new norms).
The problem is we usually forget the viewpoint we held before the event occurred. We don’t keep track of our failed predictions. This causes us to underestimate how surprised we were and overestimate our understanding at the time.
For example, in 1972, participants in a study were asked to predict the outcome of a meeting between Nixon and Mao Zedong. Depending on whether they were right or wrong, respondents later misremembered (or changed) their original prediction when asked to re-report it.
Hindsight bias influences how much we link the outcome of an event to the decisions that preceded it. We associate bad outcomes with poor decisions because hindsight makes us feel the event could have been anticipated. This was a common bias in reactions to the CIA after the 9/11 attacks.
Similarly, decision-making is not given enough credit for good outcomes. Our hindsight bias makes us believe that “everyone knew” an event would play out as it did. This is observed when people are asked about (and downplay) the role of CEOs in successful companies.
Illusion of Validity
The Illusion of Validity is caused by overconfidence in our ability to assess situations and predict outcomes.
People tend to believe that their skill, and System 1-based intuition, is better than blind luck. This is true even when faced with evidence to the contrary.
Many studies of stock market traders, for example, find the least active traders are the most successful. Decisions to buy or sell are often no more successful than random 50/50 choices. Despite this evidence, there exists a huge industry built around individuals touting their self-perceived ability to predict and beat the market.
Interestingly, groups that have some information tend to perform slightly better than pure chance when asked to make a forecast. Meanwhile, groups that have a lot of information tend to perform worse. This is because they become overconfident in themselves and the intuitions they believe they have developed.
Simple algorithms based on statistics often make more accurate predictions than professionals in the same field. But people are reluctant to trust formulas because they believe human intuition matters more. Intuition may be more valuable in extreme cases that fall outside a formula, but such cases are rare.
Expert Intuition
So when can the experts be trusted?
Kahneman says some experts can develop their intuitions so their System 1 thinking becomes highly reliable. They have been exposed to enough variations in scenarios and their outcomes to intuitively know which course of action is likely to be best. In studies with firefighting teams, commanders were observed to use their intuition to select the best approach to fight a fire.
Not all fields yield experts that can enhance their intuitions, however. It needs immediate and unambiguous feedback, and the opportunity to practice in a regular environment. Expert intuition can be developed by chess players, for example, but not by political scientists.
Insider Blindness
Insider Blindness is a biased overconfidence that develops from within a team that is involved in completing a task.
This inside view is biased to be overconfident about the success of the task. It tends to underestimate the possibility of failure, in what is known as the ‘planning fallacy’.
For example, Kahneman describes a textbook that he and his colleagues started to write. They thought the project would take two years but an expert in curriculums found that similar projects took much longer. The team continued the project despite the outside view and the book took eight years to complete.
Insider blindness leads to assessments of difficulty based on the initial part of the project, which is usually the easiest part and is completed when motivation is highest. The inside view also gravitates toward best-case scenarios and resists predicting that a project should be abandoned.
Excessive Optimism
The planning fallacy can also be a result of optimism bias.
An optimistic outlook gives people overconfidence in their ability to overcome obstacles.
Despite statistical evidence, people believe that they will beat the odds. This is often observed in business start-ups.
A Canadian inventor’s organization developed a rating system that could predict the success of inventions. No products with a D or E rating have ever become commercial. But almost half of the inventors who received these grades continued to invest in their projects.
Overconfidence and optimism encourage people to set unachievable goals and take more risks. Whilst it can be useful in maintaining commitment, it can cause people to overlook the basic facts of why a venture is unlikely to succeed.
These reasons often have nothing to do with the abilities of the people involved. For example, a hotel that failed with six previous owners still sells to a seventh owner. Despite evidence to the contrary, the new owners believe they are the game-changing factor rather than location and competition.
Utility Theory & Prospect Theory
Kahneman won his Nobel prize in economics for his development of the ‘prospect theory’. He argues choices around economics are not fully rational. They are influenced by System 1 thinking.
Economic models used to be built on the assumption that people are logical, selfish, and stable. A dollar was a dollar no matter what the circumstances.
But Bernoulli proposed the ‘utility theory’. A dollar has a different value in different scenarios, because of psychological effects.
For example, ten dollars has more utility for someone who owns one hundred dollars than for someone who owns a million dollars. We make decisions based not only on the probability of an outcome but on how much utility we can gain or lose.
The problem? Bernoulli’s theory cannot explain the outcomes of some behavioral economics studies.
Kahneman built on utility theory with Prospect theory – the value of money is influenced by biases from System 1 thinking.
Prospect theory explains why economic decisions are based on changes in wealth, and that losses affect people more than gains.
Loss Aversion
Loss aversion is the part of prospect theory that says people will prefer to avoid losses rather than seek gains.
It is observed in many different scenarios. A golfer will play for par rather than for birdies and contract negotiations stall to avoid one party making a concession. We judge companies as acting unfairly if they create a loss for the customer or employees to increase profits.
Prospect theory is based on the premise that people make judgments based on the pain they feel from a loss. Since decisions are influenced by an emotional reaction, prospect theory and loss aversion are a result of System 1.
Loss aversion runs so deeply, it leads to the sunk cost fallacy. People will take further risks to try and recover from a large loss that has already happened. There is a fear of regret, that the decision to walk away will lock in a loss.
The Endowment Effect
The endowment effect is the increase in value that people apply to a product because they own it.
Prospect theory says that loss aversion is stronger than any potential gain. This applies even when there is a guaranteed profit to be made from selling an owned product. The profit needs to be enough to overcome the endowment effect, or the perception of loss, for people to sell.
The Fourfold Pattern
The fourfold pattern is a two-by-two matrix of unlikely outcomes: almost-certain and almost-impossible gains and losses. For each of the four scenarios, the pattern captures:
- An example scenario.
- The main emotion experienced.
- How most people react.
- The likely reaction if the scenario is faced in court.
According to prospect theory, we overestimate highly improbable outcomes. This is because System 1 is drawn to the emotions associated with a highly unlucky or lucky outcome.
People become risk-averse to avoid an unlikely disappointment or seek out risk for the chance of an unlikely windfall.
For example, in scenarios with a 95% chance of winning, people will focus on the 5% chance of loss and choose a 100% win option with a lower value. They accept an unfavorable settlement.
In scenarios with a 5% chance of a large gain, like the lottery, people will choose this option over a sure win of a small amount. They reject a reasonable settlement.
Rare events, like plane crashes, also tend to be overestimated according to prospect theory. This is partly because the improbable outcome is overestimated. It is also because the chance of the event not happening is difficult to calculate. System 1 applies its ‘what you see is what you get’ bias.
Framing Effects
In prospect theory, the context of how choices are presented affects our decision-making.
When people consider risk, they prefer to view choices one by one. The cumulative effect of decisions, which would give a better picture of overall risk, is more difficult to assess.
This stems from System 1’s tendency to frame each question in the form with the most cognitive ease, combined with its ‘what you see is what you get’ bias.
For example, in a coin flip where you can lose $100 or win $200, you will win more money the more times you play, because the average outcome is to gain $50 each time.
People who are asked to make five bets at once are more likely to accept than people who are asked on five separate occasions. We can’t weigh the cumulative effect of many bets unless it’s presented to us in a simple and obvious way.
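A small simulation, using the stakes from the example above, shows why accepting the bundle is attractive: with an expected gain of $50 per flip, a run of five bets usually ends in profit.

```python
import random

random.seed(1)  # fixed seed for reproducibility

def play(rounds):
    """Coin flip: win $200 on heads, lose $100 on tails."""
    return sum(200 if random.random() < 0.5 else -100 for _ in range(rounds))

# A 5-bet bundle is profitable whenever at least 2 of 5 flips land heads,
# which happens with probability 26/32 (~81%).
trials = 10_000
profitable = sum(play(5) > 0 for _ in range(trials))
print(profitable / trials)  # roughly 0.81
```

Viewed one flip at a time, each bet risks a painful $100 loss; viewed as a bundle, the odds of coming out ahead are overwhelming. The framing, not the math, changes.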
The way that risk is presented can also provide a prime for a System 1 bias, especially if one option is associated with an emotional outcome like a loss.
For example, people were given two options for a medication program for a disease. The first is certain to save a third of the people who contract it; the second has a one-third probability of saving everyone and a two-thirds probability of saving no one. The expected outcomes are identical, but most people select the first.
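The arithmetic behind the equivalence is a one-liner. A sketch using a population of 600, the figure from Kahneman’s classic framing study (assumed here), with exact fractions to avoid rounding:

```python
from fractions import Fraction

population = 600  # assumed, as in the classic "Asian disease" framing study

# Option 1: a third of the population is saved for certain.
saved_for_sure = Fraction(population, 3)

# Option 2: 1/3 chance everyone is saved, 2/3 chance no one is.
expected_saved = Fraction(1, 3) * population + Fraction(2, 3) * 0

print(saved_for_sure, expected_saved)  # 200 200 -- identical expectations
```

The choice pattern flips when the same options are reframed in terms of deaths rather than lives saved, even though the numbers never change.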
Evaluating our Own Experiences
System 1 creates problems when we evaluate our experiences, including our happiness.
Kahneman states that we have two selves. The ‘experiencing self’ evaluates outcomes as they happen. The ‘remembering self’ evaluates the outcome after the event. The remembering self tends to have more power in decision-making.
The remembering self uses the peak-end rule: it judges an experience by its most intense moment and its final moments, rather than the experience as a whole. It also suffers from duration neglect, disregarding how long the experience lasted.
In a study, people were given the option of two painful procedures. One lasts eight minutes and gradually increases in pain until the end. The second lasts twenty-four minutes and has the same intensity of pain but in the middle. Most people will choose the second procedure.
The remembering self will also choose options that value the memory of an experience, more than the experience itself. Likewise, it will choose to suffer pain if that pain can be erased from memory.
Asking people about overall life satisfaction is an important measure in many studies. But System 1 tends to replace this difficult question with an easier one. The remembering self answers about how they have felt recently.
Kahneman proposes a better methodology for prompting the experiencing self. He has conducted studies that ask people to reconstruct their previous day. This gives a better measure of well-being and happiness, and has yielded better insights into what affects them.
Thinking, Fast and Slow Contents
Thinking, Fast and Slow has 38 main chapters in 5 parts…
Part 1: Two Systems
- The Characters of the Story
- Attention and Effort
- The Lazy Controller
- The Associative Machine
- Cognitive Ease
- Norms, Surprises, and Causes
- A Machine for Jumping to Conclusions
- How Judgments Happen
- Answering an Easier Question
Part 2: Heuristics and Biases
- The Law of Small Numbers
- Anchors
- The Science of Availability
- Availability, Emotion, and Risk
- Tom W’s Specialty
- Linda: Less Is More
- Causes Trump Statistics
- Regression to the Mean
- Taming Intuitive Predictions
Part 3: Overconfidence
- The Illusion of Understanding
- The Illusion of Validity
- Intuitions vs. Formulas
- Expert Intuition: When Can We Trust It?
- The Outside View
- The Engine of Capitalism
Part 4: Choices
- Bernoulli’s Errors
- Prospect Theory
- The Endowment Effect
- Bad Events
- The Fourfold Pattern
- Rare Events
- Risk Policies
- Keeping Score
- Reversals
- Frames and Reality
Part 5: Two Selves
- Two Selves
- Life as a Story
- Experienced Well-Being
- Thinking About Life
Appendix A: Judgment Under Uncertainty
Appendix B: Choices, Values, and Frames
Thinking, Fast and Slow FAQs
Why Do I Think Fast and Slow?
People think fast and slow because of the two systems that operate our cognitive processes. System 1 thinks fast and automatically. It takes less energy and is our ‘default’ mode, but it is prone to errors and biases. System 2 thinks slowly and is triggered when a task is more difficult or there is new information. It is more capable but takes much more mental energy.
What is the Difference Between Fast and Slow Thinking?
Fast thinking is based on intuition and subconsciously learned processes. Slow thinking is based on deliberate, conscious choices. But slow thinking is often built on fast thinking heuristics. This means even if we assume we are making rational choices, they may be influenced and shaped by biases that we are unaware of.
Is Thinking Fast and Slow Worth Reading?
Thinking Fast and Slow is definitely worth reading if you have an interest in psychology, and how and why we make decisions. The scientific studies are highly engaging and Kahneman’s theories apply to a range of well-known human behaviors. This includes the sunk cost fallacy, risk aversion, and why first impressions count. If you’re looking for a ‘how to’, you might be disappointed. The crux of the book is that we can’t control the underlying fast thinking process. But we can learn to recognize when our slow thinking is being led astray.
Best Thinking, Fast and Slow Quotes
These Thinking, Fast and Slow quotes come from The Art of Living's ever-growing central library of thoughts, anecdotes, notes, and inspirational quotes.
"A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth. Authoritarian institutions and marketers have always known this fact."- Daniel Kahneman, Thinking, Fast and Slow
Thinking, Fast and Slow PDF Summary
Want to save this Thinking, Fast and Slow summary for later?
Click the link below to get this whole summary as a handy FREE PDF...
Note: Direct link to PDF. No email required.
Wish There Was a Faster/Easier Way?
Whenever you’re ready, here are four ways I can help you be more productive, find more balance and live life more on purpose…
- Curious? Discover how productive you really are… Take this free, 2-minute assessment to unlock your PQ and discover the top 25 habits you need to get big things done. Take the 2-minute quiz →
- Overwhelmed? Get a free chapter of my book… Let me show you how to beat procrastination, permanently, with this free sneak peek inside TAoL’s ultimate productivity primer. Download your free chapter →
- Stuck? Grab a 90-Day TRACKTION Planner… Get the tool thousands trust to help them take control of their time, master their habits and hit goals in every part of their lives. Order your 90-day planner →
- Burned out? Join the TRACKTION Community… Take the 6-week masterclass, get weekly group coaching, find accountability partners and connect with like-minded self-starters. Get started FREE →