
Are We Entering the Age of Data Nihilism?
Every click, every photo, every search query we make creates a digital echo. These digital traces are the raw material fueling the AI revolution, powering technologies that are reshaping our world. Yet for the people creating it—all of us—this data has become functionally worthless.
The average internet user doesn’t think about the value of their data. They simply give it away to some of the wealthiest companies in the world, for free.
Because of this behavior, I fear we are living in an age of data nihilism, where our data means everything to AI developers yet almost nothing to us—not because our data is actually valueless, but because people feel powerless to stop it from being collected.
When I first started my AI ethics research lab, many in the AI research community were skeptical of OpenAI’s early approach. Could they truly achieve AI that rivaled humans simply by scaling up data and computing power, without deeper theoretical insights? It seemed like a strategy based on capital rather than science.
OpenAI, however, had the last laugh. Their success proved a simple, if unsettling, formula: massive datasets plus immense computing power equals unprecedented AI capability. The global AI race quickly became, fundamentally, a data arms race.
The data-centric gold rush has historic roots, starting with the deep learning revolution of the 2010s, which was itself ignited by web-scraped datasets like ImageNet, showing that data availability could dramatically improve AI performance. But today’s scale is different, and so are the stakes. Ironically, the soaring value of AI has come at the direct expense of the data that fuels it. To win the AI race, companies have been incentivized to collect data with little regard for the rights of its creators—a mentality that has been tacitly endorsed by regulators in the U.S., Japan, and India, who are willing to weaken data protections to accelerate national AI development.
This widespread disempowerment has given rise to a dangerous phenomenon: data nihilism. It’s the growing belief that our data has lost its meaning and value because we have lost all control over it. It is a resignation to the idea that living in an age of AI requires handing over full control of our data. When our digital lives are relentlessly mined without our consent or compensation, it is rational to feel that our data rights have evaporated. In fact, a 2023 Pew Research Center study found that despite 81% of Americans expressing concern about how companies use their data, 73% believe they have little to no control over it.
Data nihilism isn’t just a philosophical problem—it’s the blueprint for one of the greatest wealth transfers in modern history. AI acts as a giant funnel, siphoning the value from the data of billions of internet users and digital media, and concentrating the immense economic rewards in the hands of a few companies building foundation models. Not only is this a loss of privacy and intellectual property, but it is also a form of mass economic disenfranchisement.
Just as Nietzsche warned of the dangers of a nihilist moral void leading to societal decay, the current disregard for responsible data practices could erode trust in institutions and perpetuate systemic inequalities.
Not everyone, however, is accepting this massive redistribution of wealth and power without a fight. The creative industries are on the front lines, with authors, artists, and musicians filing dozens of lawsuits against major AI companies for copyright infringement. In parallel, a wave of litigation under privacy laws like Illinois’s Biometric Information Privacy Act (BIPA) is challenging the unauthorized use of our most personal data—our faces and voices.
This brings us to an apparent crossroads: sacrifice our data rights for technological progress, or protect them and fall behind in the global AI race. But this is a false dichotomy. There is a third path: ethical innovation.
Data can and should be sourced for AI development with consent and fair compensation—in fact, my team showed how it could be done in practice. Moving forward, researchers should work with paid and consenting participants from around the world to build high-quality datasets that the AI community can use responsibly.
It is possible to build datasets for cutting-edge AI without compromising on individual rights. “Ethically sourced” should not be a barrier to innovation, but a hallmark of its quality and sustainability.
The next step is for the AI community and regulators to take ethical data curation seriously. The economic power dynamics between AI and humans will largely be determined at the data layer, and as a result, questions about consent and compensation mechanisms for data rights holders should be a major area of focus for AI researchers and regulators. Creating opt-in or opt-out schemes that provide meaningful control to people around the world whose data serves as AI’s raw materials is a challenging task, but one that is critical to address now. Moreover, as AI developers exhaust available data, future innovations will likely depend on the quality rather than simply the quantity of data.
Nietzsche’s cure for nihilism was to create personal meaning, but the scale of AI necessitates creating systems that affirm and protect the value of humanity’s contributions. We are now at a turning point: if we fail to build such protections, we will resign ourselves to a future where the benefits of AI are concentrated among a few, and the vast majority of people find their contributions worthless. The future of AI should not be built on a foundation of mass data appropriation. It must be built on a foundation of respect, consent, and shared value. The age of data nihilism is upon us; it is up to us to prevent it.
