To Adapt to Tech, We’re Heading Into the Shadows

More and more innovation requires going into darker, more inappropriate, less ethical territory. Here’s how to respond.

The way we tell it, surgery was butchery before Joseph Lister. In 1872, two out of every five compound fracture victims treated by surgeons in Germany subsequently died from infection. At the country’s best hospital in Munich, 80 percent of all wounds were infected by hospital gangrene. “Horrible was our trade!” a surgeon later declared. Everyone was desperate for a solution.

Then Lister invented antisepsis—a technique involving carbolic acid that stopped surgical patients from getting life-threatening infections. Millions of lives were saved as a result, and Lister is now heralded as the father of modern surgery. But getting there—refining and scaling this technique to hospitals around the western world—meant that he and many other doctors broke with convention. Mostly, this meant doing unusual, unconventional, or provocative things—in a gray area between right and wrong, appropriate and inappropriate, ethical and unethical. Occasionally it went dark: They broke their oaths, hid data, and killed people.

This started with Lister. In one of his earliest and most influential works, he outlines his failed treatment of “Patrick F,” a patient with a compound leg fracture (one of a number of such failures). He applied a carbolic acid solution to the wound, but Patrick contracted gangrene. Lister noted that the caustic effects of his own antiseptic measure were a major—if not the sole—cause. We have no record of Lister securing his colleagues’ or Patrick’s consent for the procedure, and at the time Lister had no established knowledge of the proper technique, dosage, tools, or methods for assessing how his intervention was progressing. He just tried it out. Patrick’s leg had to be amputated.

Groups of desperate doctors picked up this technique based on imperfect information and tried it out for themselves on patients. The burns from this acid and the smothering dressings often accelerated gangrene—the very infection this technique was meant to treat. And gangrene was almost always fatal.

Most technical progress hasn’t been so dark, and the results speak for themselves. Lister and other surgeons mostly saved lives, perfected their technique, and published papers about their successes and failures. It’s a similar story with Curie and radium. Adrià and gastronomy. Tesla and electricity. Had they been carefully observed, most of what they did might have drawn a skeptical stare or a stern rebuke from a colleague, not ire and punishment from other professions, law enforcement, and society at large. Top management research, popular books, and expensive seminars tell us that critical innovation and adaptation are mostly achieved in the gray areas near the edges of what’s normal or appropriate. This is the gray ring in this diagram:

[Diagram: a wide gray ring of gray-area innovation surrounding a core of approved practice. Data visualization: Matt Beane]

In these cases, nobody quite sees these benign-violation-type innovations, heroes, or collaborations coming. Even when we do, we don’t have the chutzpah, resources, or insight to go after them. The important point is that “few animals are harmed” in the making of these technical leaps.

That’s shifting. For the past six years I’ve been studying how we’re adapting to work involving intelligent machines. All this work and many other studies show that more and more of our innovation and adaptation are headed into darker, more inappropriate, less ethical territory. The irony is that this is caused by the way we’re handling the latest crop of what I’ll call “intelligent technologies.”

In an increasingly connected world filling with cheap surveillance and conflicting standards on just about everything, it’s getting harder to invent, develop, commercialize, and adopt new technologies without running afoul of someone’s sense of propriety. People will still build and adapt to new tech, of course, but the old, benign, “forgiveness rather than permission” zone shrinks daily. People are turning to increasingly inappropriate and difficult-to-observe methods to get the job done. Today’s picture is looking more like this:

[Diagram: the same rings, with the gray area now much thinner. Data visualization: Matt Beane]

As you can see, the gray zone is shrinking. The old playbook on how to create and adapt to technical change is now out of date. It’s time to rewrite it.

To show you why this is happening, let me put you in the shoes of a resident (a surgical trainee) at a nephrectomy—a surgery to remove a cancerous kidney.

If you were normal, you’d walk into the OR having spent the last bits of medical school watching a few procedures at a teaching hospital or two, practicing knots and suturing on a grape or a chicken breast, and revisiting your urological anatomy notes. At best, you were in what’s called “see one” mode—learning by watching. For your first batch of nephrectomies, you’d shift to the beginnings of “do one” mode: doing the easier, safer parts of the procedure with the senior attending surgeon looking on. When you got to the difficult, risky parts of the work—like exposing the renal arteries and veins so they could be clamped—you’d shift to a more supporting role, perhaps suctioning or holding a retractor. Eventually you’re in “teach one” mode, guiding junior residents on the easy stuff. This is the approved way for you to learn the ropes—you’re necessary at almost every step of the game and you learn by doing more and more over time.

This setup allows for all kinds of “gray area” adaptation too. People might think you’re a bit odd or irritating, but you can practice your knots on a subway pole, eat your lunch while practicing dissection in the cadaver lab, and pester your attending surgeon during a procedure to let you do more. Your community will see these actions as benign violations as you try to adapt.

Given this, your robotics rotation comes as a rude awakening: The entire da Vinci system—and therefore the entire procedure—can be controlled by one person at a time. This means your involvement in the surgical work is entirely optional. Not only are you not really fluent with this glorified videogame controller, but you’re also working with a surgeon who knows that if they give you all-or-nothing control of this beast, you will be much slower and make more mistakes than they would. Moreover, every action you take is broadcast to the attending physician’s console and onto large high-definition TVs. The attending, nurses, scrub, and anesthesiologist can see and judge it all. Put together, this means the attending will barely let you operate and will “helicopter teach” you when they do, and nurses and scrubs will spread the word to other attendings that you suck. You’re stuck in “see one” mode for most of your residency. After four or five years of trying to learn the approved way, you’ve barely gotten to work through an entire robotic surgery, yet you’re legally empowered to use this tool wherever you land.

Building embodied skill in a high-status profession is just one tiny slice of how we adapt and innovate, given new technology—but the reasons for the rise of productive deviance are clear here and evident in many other industries, ranging from policing to chip design to journalism.

What happened here? In search of major leaps in productivity, we’ve created and deployed intelligent technologies. These allow for two things: much higher-quality, more widely shared scrutiny of the work by more people, and much more precise and complete control of the work by a single expert. On the surface, this is fantastic—allowing each expert to make better use of their talents and a team of diverse professionals to coordinate much more fluidly. Under the surface, it blocks you from learning through the “see one, do one, teach one” pathway that’s been the approved default for a long, long time. And gray area options aren’t really available—you don’t have legitimate access to the system before your training starts. You aren’t even going to try to push your way into a procedure, because you know you don’t have the basic skill you need to be granted control of the thing, and so does that expert. They are just going to swat you down.

If you’re going to adapt—and about one in eight residents in my study did—you have to do so in really inappropriate ways.

If you’re one of the few who manage to get really good with the robot during your residency, you started getting practical exposure to it years in advance—when everyone (even you) would say it’s totally inappropriate. In undergrad or medical school, you hung around in labs when you should have been getting a generalist education. You spent hundreds of extra hours on the simulator or reviewing videos of robotic surgery on YouTube when you should have been spending time with patients. Then, after all this prep showed the attending, nurses, and scrubs that you could handle the da Vinci, you used this to get preferential access to procedures and, most importantly, to operate without an attending in the room. The more you did this, the better you got, and the more rope you were given by attending physicians. But every one of these steps was at best a serious, very concerning breach of standards for your profession and hospital operations—and in some cases maybe even against the law.

Let’s step back to think about adaptation to new tech in general. If you’re involved in work that exhibits the above characteristics, you’re going to turn to deviance to innovate and adapt. It would be one thing to argue this off one data set, but since I published my work on surgery, I’ve checked this across more than two dozen top-quality studies, and this pattern shows up in all of them.

Almost all work settings that deal with intelligent technologies have one overarching goal: Figure out how to get value out of the damn thing. For technologists this is more about how to design and build. For marketers and business development professionals, how to pitch, and to whom. For managers, when to buy and how to implement. For users, building and mastering new techniques. Over 80 years of social science tells us very clearly that if approved means won’t allow these interdependent professionals to innovate and adapt their way forward into this exponential technostorm, some percentage of them are going to turn to inappropriate means to do so.

We’ve wired up the globe with an interconnected system of cheap sensors—keyboards, touchscreens, cameras, GPS chips, fingerprint scanners, networks to transmit and store the data, and now, crucially, machine-learning-style algorithms to analyze and make predictions based on that data. Each year that we build out this infrastructure, it gets radically easier to observe, analyze, judge, and control individual behavior—not just as workers but also as citizens. And work has gotten a lot more complex. Just a decade or two ago, the only authority that had any sway in complex work was the expert on the scene. Now we’ve got a host of professionals and paraprofessionals with distinct expertise who get a say in how the work is going and who should be rewarded and punished. This comes via formal mechanisms like 360-degree performance reviews but also informally: Who gets to decide whether a professor is pacing her lectures appropriately, or whether a beat cop is taking too long to report back as they reach their patrol destinations? Or whether any of us is adapting or innovating appropriately? Ten years ago, the answer was basically one person. Now it can be many, including those who have access offsite and after the fact. Anyone can call foul, and all of them are empowered with massive new sources of rich data and predictive analytics.

All this means that the gray area is shrinking. Few people prefer to innovate and adapt in ways that risk catastrophe or punishment—but some will turn in this direction when they know that approved means will fail. Like it or not, more and more critical innovation and adaptation will be happening in areas of social life previously reserved for “capital D” deviants, criminals, and ne’er-do-wells. Leaders, organizations, groups, and individuals that get wise to this new reality will get ahead.

But how? How can we look into the shadows to find these sketchy entrepreneurs, understand their practices, and capitalize on them while maintaining trust and staying true to our critical values?

Here are some questions to ask yourself, drawn from early indicators I’ve seen on the front lines of work involving intelligent machines:

Can you exercise surveillance restraint? Sometimes your organization, team, or even a single coworker will adapt more productively if you leave stones unturned and cameras off. To take just a tiny step in this direction in a robotic surgery, this might mean turning off the TVs while a resident is operating. You might want to do this kind of thing earlier on in residents’ training to give them space to make minor mistakes and to struggle without the entire room coming to a snap judgment about their capability. It’s that kind of early judgment that leads residents to conclude they have to learn away from prying eyes.

The broader point is that past a certain threshold, surveillance, analysis, prediction, and control stop yielding returns: not because the data or predictions are wrong, but because you are destroying the underobserved spaces where people feel free to experiment, fail, and think through a problem. Moreover, excessive surveillance, quantification, and predictive analytics can drive the work experience down the toilet. Rolling this back will be exceptionally difficult in cultures or organizations that prize technical progress and data-based decisionmaking.

Ironically, it may be companies like Google that struggle the most here. You’ll need strong leadership and binding choices not to install or activate technical infrastructure like cameras or keystroke-tracking software. In regulation-heavy, surveillance-resistant places like Europe, all of this will probably be easier. Regardless, announcing this choice and the rationale for it makes leadership accountable, and it can increase trust and encourage experimentation.

What new tactics are you going to need to find innovation? Relinquishing intelligent technologies isn’t always practical or wise, so you’re probably going to have to find sources of boots-on-the-ground insight. None of the attending surgeons I studied could correctly explain how their (rare) successful trainees were actually learning. And these attendings did a lot of what we’ve all learned to do from management experts: They asked, dropped in on other procedures, watched while actually collaborating with students, and talked with each other across hospitals. This left them with lots of explanations that focused on abstract things like innate ability, drive, and curiosity. And no facts—none of the actual tactics that all these shadow learners were using to get ahead.

If you knew an employee was finding an organization-saving innovation by breaking the law, how would you get the details? They won’t want to be watched, least of all by an authority figure. And they’re probably not going to leave normal trace data. One way is how we got them in robotic surgery: through a substantially neutral third party (in this case, me). The more diverse the groups and sites they cover, the easier it will be for them to surface what they find without naming specific individuals and organizations. The key here is that this individual should be in a position to pay a terrible price if they disclose their sources—so normal consultants won’t cut it. Another way to get this kind of insight is to shadow workers in other contexts that do work like your home base does, but again, you’d have to do it in a way that makes clear that anyone talking to you or being observed has strong protection from being revealed.

How can we use intelligent technologies to help? Right now, we’re handling intelligent technologies in a way that drives innovation and productive adaptation into the darker recesses of organizations and social life. We need to flip this. To start with easier targets, ask: What patterns will show up in the data that’s already being collected as people adapt and get value in spite of rules? How could we use AI to make new, valuable sense of that data? In most programs, surgical residents were required to use a robotic surgical simulator—a simplistic videogame for practicing with the physical console to control very basic robotic surgical actions: putting rings on pegs, passing through hoops, and so on. The best surgical trainees used this tool between ten and a hundred times as much as their “normal” (i.e., struggling) peers did. Revealing who these people are by examining the user logs isn’t particularly valuable. But my studies suggested that they used these tools in very specific, important, and unusual ways, starting with generalized practice years in advance and ending with targeted “pregame” drills to rehearse for tomorrow’s procedure. This is just the tip of the iceberg. We could train machine-learning algorithms on their simulator usage to distinguish it from that of normal trainees, which would help us understand how they learned, and also to predict learner outcomes from early simulator use.
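For a sense of what that could look like, here is a minimal sketch in Python. Every feature, label, and number in it is hypothetical (synthetic stand-ins for real simulator logs rather than anything from my study), but it shows the basic move: pull per-trainee usage features out of the logs and train an off-the-shelf classifier to see which patterns predict who ends up fluent with the robot.

```python
# Hypothetical sketch: classify simulator-usage patterns that predict fluency.
# All features and labels below are synthetic stand-ins for real trainee data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400  # hypothetical number of trainees

# Per-trainee features one might derive from simulator logs:
months_before_residency = rng.exponential(6, n)  # how early practice started
sessions_per_month = rng.exponential(3, n)       # overall practice volume
pregame_fraction = rng.beta(1.5, 8, n)           # share of sessions run as targeted rehearsal

# Synthetic label standing in for "became fluent with the robot";
# in practice this would come from later assessments of operative performance.
signal = 0.15 * months_before_residency + 0.5 * sessions_per_month + 4 * pregame_fraction
fluent = (signal + rng.normal(0, 1, n) > np.percentile(signal, 87)).astype(int)

X = np.column_stack([months_before_residency, sessions_per_month, pregame_fraction])
X_train, X_test, y_train, y_test = train_test_split(
    X, fluent, test_size=0.25, random_state=0, stratify=fluent)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
print(dict(zip(["months_before_residency", "sessions_per_month", "pregame_fraction"],
               model.feature_importances_.round(2))))
```

The output means little on synthetic data, of course; the point is that the same handful of log-derived features, computed on real usage, could both flag distinctive learning patterns and feed an early prediction of who is likely to struggle.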

Moving forward, we should ask: What can we build? What new kinds of data, interfaces, and processes could we build to help with the proliferation of compliance pressure? How can managers and workers use these to enhance trust and reduce bias? Two people can actually share control of the da Vinci robot. In principle, this was designed to allow for “driver’s ed” style education—the attending can use the brakes while the resident steers and uses the gas. But it’s almost never done. You could, however, radically revise and expand this kind of capability. The visual display and control prompts in the system could be changed to encourage attendings to share control in a way that is tailored to each specific resident—based on their AI-analyzed performance on the simulator, their success in annotating videos of prior surgeries, the list goes on. And this is just the tip of the tip of this innovation and adaptation iceberg. The point is that we can find out what to build by learning from the early, deviant individuals who are risking catastrophe and punishment to find innovative ways forward. Then, instead of being an obstacle to innovation, intelligent technologies can make adaptation easier for those of us who aren’t willing or able to head into such dark territory to get ahead.
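As a purely illustrative sketch of that tailoring logic (none of these signals, weights, or thresholds come from the actual da Vinci software or from my study), the console could combine a resident’s readiness signals into a score and cap how much control it offers on riskier steps:

```python
# Hypothetical sketch of tailored shared control; every signal, weight, and
# threshold here is invented for illustration, not taken from any real system.
from dataclasses import dataclass

@dataclass
class ResidentProfile:
    simulator_score: float      # 0-1, e.g., AI-scored performance on simulator drills
    annotation_accuracy: float  # 0-1, accuracy annotating videos of prior surgeries
    supervised_minutes: int     # prior supervised console time, in minutes

def control_share(profile: ResidentProfile, step_risk: str) -> float:
    """Fraction of control (0-1) the console offers the resident for a step.

    step_risk: "low" (e.g., closing) or "high" (e.g., exposing the renal vessels).
    """
    readiness = (0.5 * profile.simulator_score
                 + 0.3 * profile.annotation_accuracy
                 + 0.2 * min(profile.supervised_minutes / 600, 1.0))
    cap = 1.0 if step_risk == "low" else 0.4  # attending keeps most control on risky steps
    return round(min(readiness, cap), 2)

resident = ResidentProfile(simulator_score=0.8, annotation_accuracy=0.7, supervised_minutes=300)
print(control_share(resident, "low"))   # 0.71: a large share on routine steps
print(control_share(resident, "high"))  # 0.4: capped on the risky dissection
```

The specific numbers are beside the point; the design choice is that readiness gets assessed continuously from data residents already generate, so attendings have a concrete, defensible prompt to hand over more of the work instead of defaulting to “see one” forever.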

Lister and his disciples mostly developed and spread antisepsis through benign breaches of expectation—applying an untested acidic poultice here, treating an otherwise terminal patient without informing them there. These semi-inappropriate tactics worked because this process was only lightly surveilled by the powers that be. From then until recently, spotty surveillance has offered important, semi-appropriate means of generating innovation and productive adaptation to radical technical change. That’s changing. We have developed an insatiable appetite for predictive analytics and fine-grained control, fed by increasingly diverse and cheap sensors and a culture that seeks to quantify just about everything. This gives us a wide range of profound, positive benefits. It also means that if we do nothing to adjust course, getting world-improving results from technology may ironically mean that we will find ourselves breaking more rules and causing more harm than Lister could have dreamed possible. We have to do better.



