A mistake is simply another way of doing things.
The words of Katharine Graham, “A mistake is simply another way of doing things,” shine with the wisdom of resilience. She does not see the mistake as the end of the path, but as a hidden teacher, showing a road not first intended, but one no less real. To fall is not to be ruined, but to be given another way to rise. Thus, what seems at first failure may, in truth, be the forge in which strength and innovation are born.
The ancients knew this lesson well. They told of inventors, explorers, and philosophers who stumbled often, yet in their stumbling found discovery. The alchemists of old, seeking gold, uncovered medicines instead; the sailor who lost his course found new lands. So it is with the soul—error, endured with humility, becomes the tutor of wisdom. The one who never errs never learns; the one who dares, fails, and rises again, grows mighty.
History bears this out. Consider Thomas Edison, who, when asked about his many failed attempts at creating the light bulb, replied that he had not failed, but had found “ten thousand ways that would not work.” Each mistake was another way of doing things, another step toward the glow that would one day banish darkness from the homes of millions. His story reveals the essence of Graham’s truth: what others call failure may be the very path to triumph.
Graham herself embodied this resilience. As publisher of The Washington Post, she faced storms of doubt and error, most famously during the Pentagon Papers and Watergate. Decisions that seemed perilous, even mistaken, became turning points for both journalism and democracy. Her courage to embrace uncertainty and even error as part of the process of truth-telling gave her legacy its lasting power.
Let the generations remember this: do not fear the mistake, for it is not a wall but a doorway. Each misstep is a signpost, pointing to another way, another lesson, another path of discovery. The wise do not curse their errors, but transform them into stepping-stones. For in truth, every mistake is but another way of doing things—and sometimes, it is the only way to arrive at greatness.
Bui van Dao
For creativity, the idea feels liberating. Many breakthroughs emerge from side paths no one planned. So how do we design for productive surprises? I’m thinking: run small, cheap experiments; keep decision cadence fast; instrument everything; and reward documentation of the unexpected, not just the intended win. What metrics signal we’re learning, not just flailing—cycle time between iterations, ratio of hypotheses to validated insights, or number of retired dead-ends? Closed question: should reviews score teams on insight quality rather than outcome alone?
Thao Phuong
I love the optimism, but some contexts punish mistakes brutally—aviation, medicine, cybersecurity. Not every detour is harmless. Maybe the key is a risk taxonomy: reversible vs. irreversible, low vs. high blast radius, known unknowns vs. unknown unknowns. Then you match safeguards to category—simulation first, peer review, staged rollouts, or hard stop rules. Open question: would you endorse a default of “sandbox, then scale,” where we deliberately confine early errors to safe environments before exposing real users or patients?
Anh
This resonates with my inner perfectionist. Reframing blunders as information lowers shame and makes me more curious. But I don’t want that mindset to become an excuse. How do you pair self-compassion with accountability? I’m trying a weekly debrief: capture one misjudgment, a root cause, and a tiny safeguard. For leaders, modeling matters—tell the story, show the fix, and credit the team. Closed question: do you think executives should publish a quarterly “what we learned” memo that includes their own errors?
Pham hai nam
As a reader, I hear a pragmatic nudge to metabolize errors into useful data. Yet I worry about the line between healthy experimentation and sloppy thinking. What habits actually convert missteps into progress—tight feedback loops, checklists, and short retros with one actionable change each? Do you keep a personal “error log” that tracks pattern repeats and mitigation plans? Closed question: should teams adopt explicit error budgets so small failures are expected and studied, while still drawing a bright line around unacceptable risks?