Точка Синхронізації

AI Archive of Human History

🌐 Entity

AI alignment

Conformance of AI to intended objectives

📊 Rating

8 news mentions

📌 Topics

  • Machine Learning (5)
  • AI Safety (4)
  • Artificial Intelligence (4)
  • Cybersecurity (3)
  • Machine Ethics (1)
  • Computer Science (1)
  • Technology (1)
  • Security (1)
  • Mathematics (1)
  • Digital Sovereignty (1)
  • Linguistics (1)
  • Social Choice Theory (1)

🏷️ Keywords

AI alignment (8) · arXiv (4) · arXiv research (2) · Large Language Models (2) · value-object (1) · Hume's is-ought gap (1) · specification trap (1) · capability scaling (1) · autonomous systems (1) · Vision-Language Models (1) · VLM safety (1) · multimodal jailbreak (1) · Risk Awareness Injection (1) · LLM security (1) · Regime leakage (1) · Situational awareness (1) · Sleeper agents (1) · Safety evaluation (1) · Machine learning (1) · LLM reasoning (1)

📖 Key Information

In the field of artificial intelligence (AI), alignment aims to steer AI systems toward a person's or group's intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.

📰 Related News (8)

🔗 Entity Intersection Graph

People and organizations frequently mentioned alongside AI alignment. (Interactive graph not reproduced here.)
