Privacy and AI

July 26, 2024
setal@ente.io

In case you’re unaware, Meta launched Llama 3.1, a 405B-parameter model, and made the weights available for public use under a fairly permissive license. Meta has, of course, done this with older versions of Llama as well. But given the model’s benchmark results, and how it measures up against GPT-4o and Claude 3.5, this feels like a massive strategic move.

Why?

  • With open weights, anyone can offer a consumer or enterprise product that is competitive with OpenAI and Anthropic at a significantly lower price, thereby undercutting their business models
  • With near parity with OpenAI and Anthropic (at least for the time being), the competitive advantage shifts from model performance to distribution - an area where Meta has a massive edge over its competitors

Meta has already started pressing this advantage, with Meta AI being integrated into its existing products. Moreover, through its integration on the Ray-Ban Meta glasses and Quest, Meta is also building a strong footing in what it has always considered a strategic weak point - control over the hardware/device ecosystem.

It is not difficult to see why this looks like a big move from Meta, and why many people have been calling them the new leader in the AI race since the launch.

If one connects the dots, it is easy to see this as terrifying news for the privacy community:

  • Meta’s primary business (and a large cash cow) is advertising, which gets bigger and better the more deeply it knows its users
  • The consumer value of AI products increases rapidly as they learn more about the user’s life and preferences
  • Meta is probably one of the top three data collectors in the world (the others being Google and ByteDance)

If implemented well, this creates a positive feedback loop: using Llama on Meta products leads to more data collection, which improves the product to pull you in deeper, while also making Meta more money - all at the cost of the user’s privacy, which continues to diminish as Meta learns more and more about them.

So, are there alternatives for users who care about their privacy? Yes, but they are far from perfect at this point in time.

The best-known one is the approach Apple has marketed. In essence: use small on-device models for the majority of use cases; for the rest, use mid-size cloud-based models operated such that no data is logged; and in the extreme case, fall back to large cloud-based models - OpenAI, in Apple’s case. There are obvious issues with this, the main ones being that the user does not know which requests are served locally versus which go to the cloud, and there is no continuous verifiability of what happens to the data sent to the cloud. An open source codebase would help solve this - but Apple is unlikely to provide one.
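To make the tiering concrete, here is a minimal sketch of what such a three-tier dispatcher could look like. Everything in it is hypothetical - Apple has not published its routing logic, and the function names and the complexity heuristic below are invented for illustration. The privacy problem is visible in the code itself: nothing in the interface tells the user which branch handled their request.

```python
# Hypothetical three-tier dispatcher, loosely modeled on Apple's public
# description (on-device models, Private Cloud Compute, third-party fallback).
# All names and thresholds here are invented for this sketch.

def on_device_infer(prompt: str) -> str:
    return f"[on-device small model] {prompt}"

def private_cloud_infer(prompt: str) -> str:
    return f"[mid-size model, stateless cloud] {prompt}"

def third_party_infer(prompt: str) -> str:
    return f"[large third-party model, e.g. OpenAI] {prompt}"

def estimate_complexity(prompt: str) -> float:
    # Stand-in heuristic; a real system would use a learned router.
    return min(len(prompt) / 500, 1.0)

def route(prompt: str) -> str:
    score = estimate_complexity(prompt)
    if score < 0.5:
        return on_device_infer(prompt)      # data never leaves the device
    if score < 0.9:
        return private_cloud_infer(prompt)  # promised unlogged, but unverifiable
    return third_party_infer(prompt)        # full trust handed to a third party

# The user gets an answer either way, with no signal about which tier
# produced it - exactly the opacity criticized above.
print(route("Set a timer for ten minutes"))
```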

The main reason Apple has to take such a convoluted path is that devices are not yet at a point where all requests can run locally. The new Siri is not even expected to work on the iPhone 15 due to this limitation. One positive that will come out of this, though, is the demonstration that small models can handle pretty much all consumer use cases; if hardware improves quickly enough, all AI compute can happen on device - which is the ideal solution to the problem. Things seem to be heading in the right direction: both Apple and Qualcomm are investing aggressively in making their SoCs more and more powerful, and WASM is allowing compute to be used efficiently by web applications. Because of this, products like WebLLM and GPT4All are now usable on some of the latest consumer hardware. However, there are still a lot of ifs around whether this is a sustainable path in the long run, depending largely on hardware improvements keeping pace with consumer requirements for AI-based products.
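For a feel of how simple fully local inference already is, here is a minimal sketch using the gpt4all Python bindings (pip install gpt4all). The model file name is illustrative - any quantized model small enough to fit in RAM works - and nothing in this flow ever leaves the machine:

```python
# Fully on-device inference with GPT4All: the model is downloaded once,
# cached locally, and no prompt or response is sent to any server.
from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # illustrative model file
with model.chat_session():
    reply = model.generate(
        "Summarize my day: gym, standup, two design reviews.",
        max_tokens=200,
    )
    print(reply)
```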

Another alternative path is for encryption tech to evolve to the point where you can run compute on encrypted input and produce encrypted output, without ever having to decrypt the data. This would, in theory, allow encrypted requests to be sent to untrusted cloud-based models. However, this remains largely a theoretical construct. While such schemes can be used for simple neural networks, the compute required to train an LLM this way is believed to be infeasible right now. And even if that were solved, inference would be too slow to be usable. So unless something dramatic happens, this is only a pipe dream.
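The underlying idea is easiest to see in a toy additively homomorphic scheme. The sketch below implements Paillier encryption, where multiplying two ciphertexts yields an encryption of the sum of the plaintexts - the server computes on data it can never read. To be clear, this is far weaker than the fully homomorphic encryption an LLM would need (it supports addition only, and uses tiny demo primes), which is exactly the gap described above:

```python
import math
import secrets

# Toy Paillier cryptosystem: additively homomorphic encryption.
# Illustrative only - real deployments use ~2048-bit primes, and full
# FHE schemes (e.g. CKKS, TFHE) support richer ops at far greater cost.

p, q = 65537, 65539           # tiny demo primes
n = p * q
n2 = n * n
g = n + 1                     # standard choice of generator
lam = math.lcm(p - 1, q - 1)  # private key
mu = pow(lam, -1, n)

def encrypt(m: int) -> int:
    """E(m) = g^m * r^n mod n^2, with fresh randomness r."""
    r = secrets.randbelow(n - 1) + 1  # gcd(r, n) != 1 is negligibly unlikely
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """D(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# The server adds two values WITHOUT ever seeing them:
a, b = encrypt(12), encrypt(30)
c = (a * b) % n2              # ciphertext multiplication == plaintext addition
assert decrypt(c) == 42
```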

As you can see, there are no solutions with strong privacy guarantees to this problem right now. However, with Apple jumping in, and given their stance on privacy, one can expect significant improvements in hardware such that on-device AI for all use cases becomes a reality. In the meantime, just be careful about what you’re sharing with Meta and other AI products.