
Bittensor wants to be the Bitcoin of AI, says xTAO founder



Karia Samaroo, founder and CEO of xTAO, the only publicly traded company building on the Bittensor ecosystem, explains why AI needs to be decentralized.

Summary

  • Decentralized AI offers resilience that big tech can’t match, says xTAO CEO Karia Samaroo
  • Bittensor rewards AI models that perform well, operating on “pure capitalism”
  • Users want more transparency about what goes into AI models

Artificial intelligence has captured the public’s imagination like few technologies ever have. Beneath its promise, however, lie serious concerns about the concentration of power and control. Today, the most popular AI models are the exclusive property of a handful of big tech firms, which retain full control over their design and use.

Crypto.news spoke with Karia Samaroo, founder and CEO of xTAO, a public company working on the Bittensor (TAO) decentralized AI ecosystem. Samaroo explained why there is a need for an alternative model that makes AI more open, decentralized, and in line with the demands of its users.

crypto.news: What does blockchain bring to AI, and what role does Bittensor play?

Karia Samaroo: Centralization is AI’s biggest problem. As AI grows into the most powerful tool humanity has ever created, having just a few companies control it creates a huge concentration risk. I often compare Bittensor to Bitcoin. Bitcoin solved the centralization problem around money: it can’t be inflated, anyone can access it, and there are no gatekeepers. Bittensor applies the same idea to AI.

With centralized AI, like OpenAI, one authority decides how models are trained, what data they use, what biases they have, and what they censor. They can also cut off access at any time. That’s a big issue. Bittensor uses Bitcoin’s model to solve this for AI.

CN: How are companies introducing decentralization to AI?

KS: There are a few good examples of decentralized AI solutions. Grass incentivizes data collection, though it focuses on one piece of the AI stack. Render is a decentralized compute network, which is also very important.

Bittensor is broader. I’d call it a “worldwide web of AI.” It doesn’t just focus on one area like data or compute. It has multiple subnetworks, each addressing a different problem in the AI stack, and they’re all interconnected.

CN: Why do companies build on Bittensor instead of going to more established models like OpenAI?

KS: I think there are a couple of reasons. One is philosophical. A lot of people building on Bittensor see the value in contributing to a decentralized network and to the decentralized AI mission. There’s definitely a lot of attraction around that.

The other is technical. Decentralized networks have real advantages in scalability. Bitcoin, for example, has assembled the world’s largest computing network through its incentive mechanism. It’s so widely distributed that it can never be shut down, because so many nodes operate in different places, on different networks and power supplies.

And then there’s this concept of open innovation. Anyone can experiment, iterate, and monetize their models without gatekeepers. If you’re an AI engineer, normally you’d have to apply for jobs, go through interviews, get hired, and then end up working on a very specific task inside that company. On Bittensor, you can just pick a subnet you want to mine on, build your model, compete with others, and get paid instantly for doing that.

CN: AI models run by big tech firms benefit from vast amounts of data; Grok, for example, draws on Twitter. How does decentralized AI compete?

KS: I think Grass is a good example, and there are similar projects on Bittensor. The idea is to crowdsource data and incentivize people to collect and curate it. That network has grown substantially. That’s how decentralized networks can build datasets of equal or even better quality. Big tech controls the richest data today, but with the right incentives, decentralized systems can compete.

Another big issue is that when Meta or Twitter owns your data, you get nothing back. As a contributor, you’re not rewarded. Decentralized networks flip that. They align incentives with creators and contributors, the way it should be. If you take a photo, you should be credited. If you make a post, you should benefit from it.

CN: How does decentralized AI address the safety and social impact of its models?

KS: There are a few aspects to safety. One is the training data. If it’s biased, toxic, or contains sensitive information, that’s a problem, and it’s true for both centralized and decentralized systems. That’s something people are working on every day.

Another is the models’ outputs: how do you prevent harmful ones? On Bittensor, that’s handled by validators. They’re responsible for detecting harmful or low-quality outputs, and the better they do their job, the more rewards they earn. It’s baked into the network design.
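
To make that mechanism concrete, here is a minimal, hypothetical Python sketch (the names and the scoring rule are illustrative, not Bittensor’s actual protocol code): validators assign quality scores to each miner’s outputs, and a block’s token emission is split among miners in proportion to those scores, so harmful or low-quality outputs earn nothing.

    # Hypothetical sketch of validator-weighted rewards, not Bittensor's real API.
    # Validators score each miner's output; emission is paid pro rata to scores.
    def allocate_rewards(scores: dict[str, float], emission: float) -> dict[str, float]:
        """Split a block's token emission among miners, proportional to their scores."""
        total = sum(scores.values())
        if total == 0:
            return {miner: 0.0 for miner in scores}
        return {miner: emission * score / total for miner, score in scores.items()}

    # Example: a validator scores three miners; one output is flagged as harmful (score 0).
    scores = {"miner_a": 0.9, "miner_b": 0.6, "miner_c": 0.0}
    print(allocate_rewards(scores, emission=100.0))
    # {'miner_a': 60.0, 'miner_b': 40.0, 'miner_c': 0.0}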

There are also some monitoring policies from the foundation, but the goal is to phase those out. Over time, safety and governance really become the job of the validators.

CN: Are you concerned about censorship of these models going forward, either from governments or as a reaction to biased outputs?

KS: It’s a good question. I’d compare it to centralized or state-owned media, where a single decision-maker can choose what to show or not show. If they’re pressured or just decide internally, they can change what the outputs look like.

That’s a big problem. We already see it on social media. If Meta wants to push a certain narrative, they’ll do it. It’s not necessarily evil — it’s just how incentives work.

Decentralized AI is more representative of the people. It’s not perfect, but if a subnet or product on Bittensor becomes too biased, the participants in the network can vote and adjust the incentives, so poor performance earns fewer rewards.

The idea is that if the system reflects the population, people will support the products that feel fair and transparent. And it’s more auditable — you can see the incentive structures, you can see the code. With closed systems, you can’t. That’s what worries people about centralized AI.


