
Should we be playing it safe with AI? To mitigate risks, some argue that companies should set limits on which decisions can be made in collaboration with AI agents. But this creates a fundamental paradox: how can companies unlock truly innovative ideas while limiting the AI that powers them?
In this HBR article, Mark Purdy argues that agentic AI promises greater workforce specialization, enhanced innovation through experimentation, and increased trust in reliable sources. He's right. This technology is poised to reshape work by streamlining repetitive tasks, enhancing decision-making, and opening new avenues for human-AI collaboration.
Purdy also says that AI agents could shift organizational structures and job roles, requiring companies to invest in upskilling "innovators" in their workforce. That's prescient.
We've been inspired by this way of thinking and have completely realigned our emdata.ai business model to use AI agents to solve real-world problems. As I write in other posts, we think this technology has the potential to turn millions of ambitious "moonshot" ideas into reality.
Yet to ensure responsible and fair implementation of AI agents, he advises companies to limit the decisions that emerge from AI-human collaboration. Particularly in an environment where AI agents take initiative, make their own decisions, and act autonomously in pursuit of specific objectives, he says managers should "scaffold" human-AI decisions to prevent problems.
But should we limit agentic AI because of some perceived risks? I don't think so. Instead of playing it safe, I prefer to go big on AI without putting guardrails around those "moonshot" ideas. Disruption requires boldness, not caution.
Tony McGovern
Tony McGovern is Founder and Data Scientist at emdata.ai.