Meaning Alignment Institute
Looking for testers for a Social App
We are looking for testers for a social app we will use to perform research on AI-based market intermediaries. It will also help you get closer to your…
Aug 14 • Oliver Klingefjord and Joe Edelman
July 2025

Introducing Full-Stack Alignment
We're announcing an ambitious research program to co-align AI systems and the institutions that embed them with what people actually value.
Jul 31 • Oliver Klingefjord, Joe Edelman, and Ryan Lowe
June 2025

Market Intermediaries: A post-AGI Vision for the Economy
An outline of an economic mechanism for human flourishing after AGI. Also, a brief look at an experiment the Meaning Alignment Institute will run later…
Jun 21 • Oliver Klingefjord and Joe Edelman
December 2024

Model Integrity
You may want compliance from an assistant, but not from a co-founder. You want a co-founder with integrity. We propose ‘model integrity’ as an…
Dec 5, 2024 • Joe Edelman and Oliver Klingefjord
March 2024

What are human values, and how do we align to them?
We are excited to release our new paper on values alignment! Co-authored with Ryan Lowe, and funded by OpenAI.
Mar 29, 2024 • Joe Edelman, Oliver Klingefjord, and Ryan Lowe
February 2024

David Shapiro Interview
And two other quick updates.
Feb 6, 2024 • Oliver Klingefjord and Joe Edelman
December 2023

Year End Bonus: a GPT to help with your New Year's Resolutions
This time of year, many reflect on their values. What are yours? How can you weave your life around them?
Dec 31, 2023 • Joe Edelman
Meaning Alignment Institute: Year in Review
And what's next for 2024
Dec 29, 2023 • Oliver Klingefjord and Joe Edelman
October 2023

OpenAI x DFT: The First Moral Graph
Beyond Constitutional AI; our first trial with 500 Americans; how democratic processes can generate an LLM we can trust.
Oct 24, 2023 • Joe Edelman and Oliver Klingefjord
September 2023

Help us make ChatGPT wiser
Join our OpenAI-backed experiment to democratically fine-tune ChatGPT's values.
Sep 20, 2023 • Joe Edelman and Oliver Klingefjord
August 2023

We are now "The Institute for Meaning Alignment"
Hello everyone!
Aug 29, 2023 • Joe Edelman
Introducing Democratic Fine-Tuning
An alternative to Constitutional AI or simple RLHF-based approaches for fine-tuning LLMs based on moral information from diverse populations.
Aug 29, 2023 • Joe Edelman and Oliver Klingefjord