How could we create an Open Reinforcement License?
How do you get people to provide reinforcement signal for modern foundation models with a guarantee that their data isn't being used to prop up one of the toxic platforms?
Never mind foundation models; content platforms already put reinforcement learning to very effective use. Surveillance yields great RL policies for recommending engaging content, for example, but it's afforded by a toxic business model.
Just because we can formulate this question doesn't mean there's a solution. Who would people want their data to go to, in a perfect world? Maybe, if it were anonymized, we'd be okay with it going to scientists.
There are so many possible levers of control that it's very likely we just haven't gotten creative enough yet:
- Exactly what data are we talking about?
- It's my device, why shouldn't it collect behavioral data on my behalf?
- There's no reason to believe that decent anonymization is impossible; the only problem is that it makes the data less useful for targeting customers.
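To make the anonymization point concrete: one well-studied technique is differential privacy, which releases aggregate statistics with calibrated noise so that no individual's record can be confidently inferred. Below is a minimal sketch of the Laplace mechanism applied to a behavioral count (say, how many users clicked something); the function names and the `epsilon` choice are illustrative, not drawn from any particular library.

```python
import math
import random

def laplace_noise(scale):
    # Sample from Laplace(0, scale) via inverse-CDF transform.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(true_count, epsilon, sensitivity=1.0):
    # Laplace mechanism: adding noise with scale sensitivity/epsilon
    # gives epsilon-differential privacy for a counting query
    # (one person changes the count by at most `sensitivity`).
    return true_count + laplace_noise(sensitivity / epsilon)
```

A smaller `epsilon` means stronger privacy but noisier answers, which is exactly the trade-off above: the data stays useful for science in aggregate while becoming much less useful for targeting any one customer.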
All these new structures of the internet haven't matured yet; I think we'll come around to the necessity of publicly funded indexes and exclusively open collection of behavioral data.