Range

Range, by David Epstein, makes the case for becoming a generalist in an increasingly specialized world. Instead of going deep in one narrow discipline, we should try to connect the dots between many different fields.

Federer vs Tiger

Tiger Woods is the textbook example of a “specialize early” upbringing. His dad gave him a miniature club when he was just three years old and taught him the golf swing using pictures. His success is generally attributed to this early, diligent practice.

Roger Federer followed the exact opposite path. He played many different sports (tennis, ping pong, racquetball, badminton) and really didn’t care which, so long as there was a ball involved. He credits his success to having room to experiment and grow, and then fully committing to tennis only later on.

The book tries to answer the question: which approach is right, the focused specialist or the adaptable generalist?

Kind vs Unkind conditions

The circumstances for learning can vary, and often vary quite wildly.

We usually think of “kind” learning conditions, like sport or a game such as chess: feedback is immediate, the rules are clear, and it’s easy to start improving because you can effectively measure your own reward function.

But what about “unkind” environments? Ones where it’s unclear whether you’re improving at all, and it’s difficult to work out what the rules even are.

There are many cases where this lack of feedback happens in the real world… we see it in doctors who never see their patients return and teachers who lose track of their students.

Startup idea: what is currently an unkind environment that could be turned into a kind one? Or vice versa… which environments are “too kind” and don’t force participants to pattern match adequately?

Chess “humans-in-the-loop”

The best chess players today are “centaurs”. They run a variety of chess engines, but then have a team of humans make the final call.

These human/computer combinations currently play the highest-level chess in the world.

What separates computers from humans is that computers are great at “tactics”: the little bits and pieces of the game, sequences of moves that have to be calculated exactly right. Humans, on the other hand, perform better at strategy, the overall positioning that sets up the right tactics.

Tactics are exactly the kind of “kind” environment that computers excel at.

The implicit prediction is that more and more tactical problems will be handled by machines, while humans lay out the strategy.

Flexible vs programmed learning

Epstein talks a bunch about what it takes to get humans to learn, and why so much learning today doesn’t hit the mark.

Immediate reward isn’t as useful as a “struggle to learn”. In many cases, getting immediate feedback about a right or wrong answer causes us to forget it quickly rather than turn it over in our minds. I’ve definitely felt this in my own experience learning… I can still remember some old math competition solutions that I spent weeks trying to work out.

The Suzuki method is a great example of more flexible learning. It’s designed to teach kids music by ear, in the same way that they learn a language: building on little fragments to develop more musicianship than drills ever could. “Jazz musicians can learn to play classical music, but classical musicians can never learn jazz.”

All of this has me thinking that there should be a new set of startups and education products focused on this set of techniques. In the same way that Duolingo teaches via spaced repetition, I wonder if there are more interactive ways of learning that pull from the same playbook.
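For a rough sense of what that playbook looks like under the hood, here’s a minimal sketch of a spaced-repetition scheduler in the spirit of SM-2. The `Card` fields and the interval/ease constants are illustrative assumptions on my part, not Duolingo’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0  # days until the next review
    ease: float = 2.5           # growth factor applied after each success

def review(card: Card, recalled: bool) -> Card:
    """Reschedule a card after one review.

    A successful recall stretches the next interval (the productive
    "struggle to learn" happens across longer and longer gaps); a miss
    pulls the card back to tomorrow and makes future intervals grow
    more slowly.
    """
    if recalled:
        card.interval_days *= card.ease
        card.ease = min(card.ease + 0.1, 3.0)
    else:
        card.interval_days = 1.0
        card.ease = max(card.ease - 0.2, 1.3)
    return card

# Three successful reviews push the next review out by a couple of weeks.
card = Card()
for _ in range(3):
    card = review(card, recalled=True)
print(f"next review in ~{card.interval_days:.0f} days")  # roughly 18 with these made-up constants
```

The point of the sketch is the shape of the feedback loop: the reward for remembering is a longer gap before you see the material again, which is exactly the kind of delayed, effortful recall Epstein argues sticks.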

How useful is “grit”?

There are two interesting studies cited here.

West Point cadets go through “the beast,” a set of grueling trainings on the path to becoming military leaders. They have to do everything from physical conditioning to taking off their gas masks and exposing themselves to tear gas. Surprisingly, this “grit-first” approach causes most participants to leave the army… but not immediately. Retention was lowest right after graduates finished their five-year tour of duty. Epstein’s takeaway is that these soldiers had to choose a path at such a young age that many of them later want to switch to a field that’s a better match, far more than they would if early specialization were a real competitive advantage.

Freakonomics ran a study asking people facing a big life decision to flip a virtual coin: if it came up heads, they should go through with the change. About 10% of the 20,000 respondents said they were deciding whether to switch jobs, and a fraction of those ended up following through. On average, the ones who switched careers were happier, even though it was pure random chance that had pushed them to do it.

Don’t be fooled by the data you see

There’s apparently a famous case study given to new MBA students, detailing the Carter Racing team.

In the study, the team has to decide whether to run a particular race. If they race, they have the opportunity to win money and glory. But they may also lose their car to an engine failure, which would both set them back financially and knock them out of racing for the rest of the season.

The MBA students are given a set of historical data plotting a handful of engine failures at different temperatures. They know that the race will take place at **40 degrees Fahrenheit**.

Teams have to decide whether to race or not… and plenty of them decide to pull the trigger. In the six races they can see, there’s no clear correlation between engine failures and the temperatures at which they happened.

The professor listens to the pro and con arguments… before asking the students “did any of you ask to get more data?”

The professor fills in the missing data… and suddenly it’s clear that any race under 50 degrees F is destined to fail. Anyone who has chosen to race is asking for engine failure.

It’s not unusual for a classroom to go by without any student asking for more data… but then comes the sinister twist: _the data is adapted directly from the Challenger shuttle launch._

Any team that decided to race made the same decision that NASA did under similar circumstances. NASA, too, was presented with just a subset of the data and found no correlation.

The lesson is clear: don’t be fooled by the data in front of you. Ask for more.
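To make the selection effect concrete, here’s a small sketch with made-up numbers (not the actual Carter Racing or Challenger figures): correlation computed only on the races that had problems looks like noise, while the full record, with the clean warm-weather races included, shows the temperature effect plainly.

```python
import math

def correlation(xs, ys):
    """Pearson correlation, no external libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (temperature_F, gasket_failures) records for past races.
# The six races that had problems are all the students are shown.
shown = [(53, 2), (57, 1), (63, 1), (70, 1), (70, 1), (75, 2)]

# The races with zero failures, all run in warmer weather, are what's missing.
missing = [(66, 0), (68, 0), (70, 0), (72, 0), (73, 0),
           (75, 0), (76, 0), (78, 0), (79, 0), (81, 0)]

temps, fails = zip(*shown)
print(round(correlation(temps, fails), 2))   # ~ -0.06: temperature looks like noise

temps, fails = zip(*(shown + missing))
print(round(correlation(temps, fails), 2))   # clearly negative: colder races fail more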

Letting go of your tools

It’s an odd, uncomfortable thing for teams to abandon tools and ways of thinking they’ve come to rely on. And yet it’s often essential to reaching a successful outcome.

There are a number of stories about wildland firefighters who keep hauling their giant packs and chainsaws as they flee, only to be overtaken by the flames. The habit of carrying their tools is so ingrained that they lose sight of what’s really important.

There’s another story of an elite paratrooper force whose primary job was running rescue missions. For one mission, the calculus simply didn’t make sense for the team lead to go in. Going against every ingrained norm of “solidarity” and “the commander should be on the ground,” the commanding officer stayed behind. It rocked him, and other members of the unit, to their core, because those teachings were so fundamental.

The same thing happened at NASA. The agency was so used to dealing with arguments backed by data that any qualitative argument was ignored until more data could be produced to support it.