We are now "The Institute for Meaning Alignment"
Hello everyone!
Many of you found us first through the talks we released last January, under the name “Rebuilding Meaning”.
Over the past months, it’s become clearer and clearer that, right now, AI is the biggest lever for us to build a meaning-centric society. LLMs already do a really good job at eliciting people’s values, which is essential for scale. And AI is fast becoming the electricity that lights up the post-industrial world, which means that we now have a unique opportunity to light it up with meaning.
So, we have a new name and website, as the Institute for Meaning Alignment, with a renewed mission statement: to help AI and humanity collaborate on human and planetary flourishing. This includes research on AI architectures and social choice mechanisms that can help us build wise, values-aligned AI, and upgrade our institutions.
We have three projects under this banner:
Our main focus right now is Democratic Fine-Tuning: DFT is an alternative to Constitutional AI and simple RLHF-based approaches that gathers moral information (values, or sources of meaning) from diverse populations to shape LLM behavior. We just won the OpenAI Democratic Inputs to AI grant for it! (Read more in our blog post today or on LessWrong.)
We need participants to articulate new values to train the models. Next week, we’ll send an invitation on this Substack to try it out for yourself—and be the first to contribute your values to creating Wise AI.
Wise AI: Wise AI is a new architectural vision for building artificial general intelligence that’s as wise as it is smart: AI that understands itself in situations of moral weight, and that can develop and elaborate its values in collaboration with human groups. (Read more)
The Meaning Economy Research Consortium (MERC) is our most ambitious bet in socio-technical alignment, where we research how AI-augmented economic mechanisms can be better for human flourishing than current markets and corporations, and can replace consumption and attention economies with economies of meaning. (Read more)
To support all of these initiatives, we’re also building a network of aligned people in AI and other sectors. So, we’ve been prototyping community structures and activities that connect people in service of a meaning-aligned world.
We’ve started with small, private experiments, to really nail the fundamentals. For instance, we recently hosted researchers for a “Summer of Meaning” residency in Berlin. It was a delight, with shared rituals and community-bonding experiences, plus great discussions on meaning metrics, on meaning and spirituality, and more—and of course long Berlin summer nights and spa recoveries the day after.
Our pivot from Rebuilding Meaning to Meaning Alignment has been, in many ways, a reduction in scope. It’s been rewarding, but also a big challenge. There were many ups and downs.
Thanks to everyone who provided support and enthusiasm, especially our friends, team, and advisors who were part of the journey: Anne-Lorraine Selke, Aviv Ovadya, Ben Gabbai, Brian Christian, Geoff Anders, Ivan Vendrov, Kerry Vaughan, Jake Orthwein, Joel Lehman, Jordan Hall, Liv Boeree, Mark Estefanos, Max Heald, Morgan Sutherland, Oliver Klingejord, Ryan Lowe, and Shelby Stephens.
With love,
Ellie and Joe