5 Comments
Duygu:

Unfortunately, very few people think like you and trust in the 'eternal good'. We could hear her unsaid wishes, ideology, and feelings; that is something AI can never do.

Balázs Kégl:

Yes, I think that until top-down causation is integrated into science, it will stay like that. The positive side is that it's a fun space to be in, because many of my cognitive findings here apply directly to my life and improve it. It's hard to explain, but I think there is a critical mass of scientists and we are getting there.

Gerard:

There’s nothing to align to. AI has no internal state or self.

https://tinyurl.com/ai-myths

Balázs Kégl:

But this is about aligning the scientist, not the AI.

Gerard:

Language is inherently limited, and terms like 'alignment' take on different meanings depending on the context, often leading to misunderstandings. In the realm of AI, alignment is not only a technical challenge but also a philosophical one: how do we ensure AI systems understand and act upon human values? These systems lack the contextual depth and ethical reasoning necessary to fully internalize or implement complex human objectives.

Similarly, in science, there is a disconnect between reality and the models used to describe it. Reification, or treating models as if they perfectly represent reality, can lead to a distorted understanding. Semiotics, the study of symbols and their meanings, reminds us that scientific theories and equations are just that: symbols, not reality itself.

The push toward regularizing or aligning people's perspectives, whether through scientific norms or AI systems, threatens individuality, which is crucial for creativity and innovation. Ultimately, AI is a tool that processes information within the boundaries of what we teach it, and it should not be mistaken for possessing human-like consciousness or agency.
