The Seedling and the Wind: On Resistance, Force Multiplication, and What to Tell an Eighteen-Year-Old in the Age of AI
Ilya Sobol, MD
A gardener knows something that most futurists do not: a seedling raised entirely indoors will die the moment it meets the open sky. Not because the world is cruel, but because the plant was never hardened. In horticulture, hardening is the deliberate, graduated exposure of a young plant to wind, temperature swings, and direct sunlight. The wind, in particular, does something remarkable. It stresses the stem just enough to trigger changes in gene expression that produce structural proteins, thickening and strengthening the tissue from the inside out. Without that periodic stress, the stem remains brittle. It looks alive, but it has never truly prepared itself to stand on its own.
This is not a metaphor I am shoehorning into a conversation about technology. It is the foundational principle I keep returning to as I watch artificial intelligence reshape the landscape of work, ambition, and meaning. Life—all of it, from the cellular to the civilizational—runs on resistance. The discomfort you feel when you are learning something that does not come naturally. The failures that recalibrate your intuition. The push-ups that micro-tear muscle fiber so it rebuilds stronger. Even the newborn, fragile and dependent, must eventually encounter friction with the world in order to develop into something durable. Resistance is not an obstacle to growth. It is the mechanism of growth.
And that is why the popular vision of an AI-driven future—the one where every job is automated, everyone receives a universal basic income, and humanity drifts into comfortable irrelevance—troubles me. Not because the technology is not powerful. It is. But because a species that removes all resistance from its own existence does not ascend. It atrophies. If you have ever seen the film WALL-E, you will recall the image of bloated humans gliding through a spaceship on hovering recliners, every need anticipated and delivered by machines, every capacity to act for themselves long since withered. It is played for gentle comedy, but it is the logical endpoint of a world that confuses comfort with flourishing.
I have been watching the AI movement closely since the first major wave of transformer-based large language models hit the public in late 2022 and early 2023. The question that has consumed me from the beginning is not whether AI is impressive—it is—but rather: where, precisely, does the boundary lie? In what functions can an AI, whether operating inside a computer or eventually within a robotic body, perform a task better, faster, or cheaper than a human? And where does the human remain irreplaceable?
The loudest voices in the conversation tend to flatten this into a binary: either AI replaces everything, or it is overhyped and changes nothing. Both positions are wrong, and the inability to hold two competing truths simultaneously is itself one of the great limitations of human cognition. Consider the dot-com bubble. Yes, it was a speculative mania. Yes, people lost fortunes. And yes, the underlying technology was entirely real. The companies that survived—Google, Amazon, Microsoft—went on to become the most powerful enterprises on Earth. Two things were true at once: it was a bubble, and it was a revolution. The same duality applies to artificial intelligence today. The hype is overblown in its timeline predictions, and the technology is genuinely transformational. Holding both of those truths requires nuance, and nuance is hard.
My own experience offers a useful window into where the boundary currently falls. As a surgeon, educator, and business owner working in healthcare, I have watched AI attempt to integrate into one of the most complex and consequential industries on the planet. The results are instructive because they illustrate a pattern that extends far beyond medicine.
What AI does brilliantly right now is structure unstructured data. In a clinical context, that means taking the scattered, fragmented, often contradictory mess of a patient’s medical record—spread across multiple electronic systems, buried under years of documentation—and curating it into something coherent and navigable. A large language model, combined with semantic search, can mine that data and present it to me on a silver platter before I walk into the exam room. Critics will say that AI makes mistakes. This is true. What those critics often do not understand is how many mistakes the current human-only system already produces. Documentation errors, data access failures, information overload—the baseline is far from perfect. An AI system that curates and surfaces information, even imperfectly, is already an improvement over the status quo, precisely because it provides a structured starting point that I can corroborate rather than a chaotic haystack I must search from scratch.
But here is where the boundary becomes vivid. After the AI has curated the data, after I have reviewed and corroborated it, I walk into the room. And in that room, something happens that no language model can replicate. I see the patient. I see their family. I read the micro-movements of their eyes, the tension in their posture, the things they say and the things they conspicuously do not say. In medicine, we call this the eyeball test, but it is more than that. It is every sense firing at once—visual, auditory, even olfactory. Experienced clinicians can often tell within seconds whether a patient is truly sick or essentially well, and that judgment integrates information that exists nowhere in the chart. It is an act of embodied cognition, of emotional intelligence operating in real time, in physical space, with a living human being who is frightened or confused or in pain.
This is the part that will be hardest for AI to replicate, and I believe it will remain so for far longer than the optimists predict. Not five to ten years. Fifty to a hundred. The reasons are not merely technical; they are fundamentally about the nature of physical reality. Every human body is different. Every encounter is different. The ability to adapt dynamically in three-dimensional space, with fine motor control, in response to infinite variability—this is something biological organisms have spent hundreds of millions of years evolving to do. I work with surgical robots daily. They are extraordinary instruments, but they are one hundred percent controlled by human surgeons. I have seen the hype about autonomous surgery. I live in that world. It is not close.
So what is AI, if not a replacement? It is a force multiplier. This is the metaphor that I think captures the truth most precisely. Think of the difference between digging a ditch with a shovel and digging it with an excavator. In both cases, a human is in control. In both cases, the human’s skill, judgment, and intention determine the outcome. But the magnitude of what that human can accomplish is radically different. AI is the excavator for thought work. It does not think for you. It amplifies your thinking. And this distinction has a crucial corollary: if your thinking is poor, AI amplifies poor thinking. If your fundamentals are weak, force multiplication produces a larger version of weakness. Garbage in, garbage out—but at scale.
This is why the popular fantasy of simply handing your work to an AI and collecting the output is so dangerous. What you get when you type a vague prompt into a language model and accept the first response without critical engagement is not intelligence. It is slop. It is the intellectual equivalent of fast food: engineered to look and feel satisfying, but devoid of the substance that would actually nourish you. The people who will thrive in an AI-augmented world are not the ones who outsource their thinking. They are the ones who bring sharp, well-formed thinking to the table and then use AI to extend its reach.
And here is the part that most people miss: everyone has access to the same frontier models. As the models improve, everyone improves. The rising tide lifts all boats. So the only durable source of differentiation is you—your depth of understanding, your domain expertise, your ability to ask the right questions, your judgment about when to trust the machine and when to override it. The AI is the multiplier. You are the variable. And a multiplier, no matter how large, cannot compensate for a variable that approaches zero.
If I were advising an eighteen-year-old today—someone standing at the threshold of adult life, trying to decide what to learn, what to become, how to position themselves in a world that feels like it is shifting under their feet—I would offer three pillars.
First, build your foundation.
Learn to read deeply and write clearly. I do not mean the mechanical ability to form letters on a page. I mean the capacity to think with sustained concentration, to wrestle with a complex idea until you have genuinely understood it, and then to condense that understanding into language precise enough to communicate it to another person. This is the hardest and most important skill you will ever develop, and it is the one most at risk in a culture saturated with distraction and shortcuts.
Read the great books. Read philosophy. Read economics. Read history. Not because you will quote Seneca in a job interview, but because the act of engaging with difficult, long-form thought literally restructures the architecture of your mind. It teaches you to hold complexity, to tolerate ambiguity, to follow an argument across many pages and evaluate it honestly. These are the capacities that make you a powerful thinker, and powerful thinking is the raw material that AI multiplies.
If you cannot do this—if you cannot sit with a blank page and organize your own thoughts without reaching for a chatbot—then stop everything else and fix that first. Go sit in a quiet room with a typewriter if you have to. Because every interaction you ever have with AI will be limited by the quality of what you bring to it. Your prompts, your context, your discernment—these are the seeds. If the seeds are hollow, no amount of algorithmic fertilizer will make something meaningful grow. The form does not matter. Write it longhand, type it, dictate it into your phone while pacing your kitchen at six in the morning. What matters is that the ideas are yours, that you have labored over them, and that you can articulate them with enough clarity and structure to be genuinely useful.
Second, learn AI.
This may sound contradictory to the first pillar, but it is not. Both are true simultaneously—there is that duality again. AI is a tool of extraordinary power, and refusing to learn it is like refusing to learn to use electricity. You can have philosophical objections to how it is being deployed, and you can have legitimate concerns about its social consequences, and you should still learn how to use it, because the world will not wait for you to finish deliberating.
What you need to develop is a feel for the handoff points. Where can you productively delegate to an AI, and where must you do the work yourself? What is the point of diminishing returns on your own manual effort, beyond which the force multiplication of AI washes out any additional gain from doing it by hand? These are practical judgments, not theoretical ones, and you can only develop them through repeated use. Experiment relentlessly. Use AI to research, to draft, to organize, to pressure-test your ideas. But never accept the first output uncritically. The discipline of iterating—of pushing back, refining, asking better questions, demanding better answers—is itself a form of the resistance that makes you stronger.
Third, learn a complex physical skill.
This is the advice that will sound most counterintuitive in an age mesmerized by the digital, but I believe it is the most strategically important of the three. Learn something that requires you to work with your hands in the physical world, solving problems that are moderately to highly complex and that demand real-time adaptation to unpredictable conditions. Plumbing. Electrical work. Construction. Surgery. Nursing. Welding. The specific discipline matters less than the underlying characteristic: it must deeply interweave cognitive and physical work.
The reason is simple. Pure thought work—the kind done entirely at a keyboard, analyzing data, producing documents, managing spreadsheets—is the category most directly vulnerable to AI displacement. Not necessarily total replacement, but significant consolidation. Where three people once sat at computers performing knowledge work, you may soon find one person managing an army of AI agents. The jobs will not vanish overnight, but the leverage will shift, and the number of humans needed to produce a given unit of output will shrink.
Physical work, by contrast, remains extraordinarily difficult to automate. I say this not as a theorist but as someone who watches robots operate every day. The state of autonomous fine motor control in unstructured environments is nowhere near what the headlines suggest. A robot that can assemble identical components on a factory floor is impressive. A robot that can crawl behind a toilet in a house built in 1947, diagnose a corroded pipe fitting, discuss options with the homeowner, and execute a repair in a space it has never encountered before—that robot does not exist, and it will not exist for a very long time. The variability of the physical world is the moat. Every house is different. Every patient is different. Every job site is different. Biological organisms navigate this variability effortlessly because evolution has spent eons optimizing for exactly this kind of adaptive, embodied problem-solving.
The person who synthesizes all three pillars—foundational intelligence, AI fluency, and a complex physical skill—will be exceptionally well-positioned. Imagine an AI-augmented plumbing company run by someone who understands the trade, who can deploy AI for scheduling, diagnostics, customer communication, supply chain management, and business strategy, and who can also get under the sink and do the work. That person is not a plumber who happens to use software. They are a new kind of professional—someone who has woven together the human and the machine in a way that neither can replicate alone. The entrepreneurial opportunities in that synthesis are enormous, and they are largely unrecognized.
Beneath all of this practical advice is something more fundamental, something I keep circling back to. We are organisms. We are not disembodied intellects floating in digital space. We are creatures of flesh and bone and breath, and we require movement to survive—not just mental movement, but physical movement. You must move your mind and move your body. You must do both, because we are built to do both, and when either one stops, something essential begins to die.
The great danger of the AI era is not that machines will become too intelligent. It is that humans will become too passive. That we will mistake comfort for progress, convenience for flourishing, and output for understanding. That we will allow the tools to do our thinking and our striving for us, and in doing so, remove the very resistance that makes us capable of growth. The seedling that never feels the wind does not become a tree. It becomes kindling.
So yes, learn the tools. They are powerful, and they are real, and anyone who dismisses them is making a serious error. But do not let the tools learn you. Do not let them flatten you into a passive consumer of generated content, a human reduced to clicking “accept” on someone else’s—or something else’s—output. Maintain your capacity for original thought. Maintain your capacity for physical work. Maintain your capacity for discomfort, because discomfort is the price of growth, and growth is the point.
The future does not belong to the machines. It does not belong to the humans who refuse to use machines. It belongs to the humans who remain fully human—thinking, striving, adapting, building with their hands and their minds—while wielding machines as the magnificent tools they are. The wind is coming. Make sure your stem is strong.