Public Posts

App: cards.hiper.freezr
salman · cards.hiper.freezr · 2/8/2026, 6:04:08 PM
very deep
darioamodei.com
Confronting and Overcoming the Risks of Powerful AI
Highlights
"
Models inherit a vast range of humanlike motivations or “personas” from pre-training (when they are trained on a large volume of human work). Post-training is believed to select one or more of these personas more so than it focuses the model on a de novo goal, and can also teach the model how (via what process) it should carry out its tasks, rather than necessarily leaving it to derive means (i.e., power seeking) purely from ends
"
"
The concern is that there is some risk (far from a certainty, but some risk) that AI becomes a much more powerful version of such a person, due to getting something wrong about its very complex training process.
"
"
Recall that these AI models are grown rather than built
"
"
For example, in the last 2 years, AI models went from barely being able to complete a single line of code, to writing all or almost all of the code for some people—including engineers at Anthropic.
"
"
AI isn’t a substitute for specific human jobs but rather a general labor substitute for humans.
"
Posted by salman on 2/8/2026, 6:04:09 PM
salman · cards.hiper.freezr · 4/27/2025, 10:54:30 AM
This relates to my post on agentic AI: The uncomfortable realm of explicit moral values.
www.axios.com - By Sam Sabin
Managing those AI identities will require companies to completely reassess their cybersecurity strategies.
Posted by salman on 4/27/2025, 10:54:32 AM
salman · cards.hiper.freezr · 3/1/2025, 7:58:30 AM
Tim is brilliant (as usual) in this post
Posted by salman on 3/1/2025, 7:58:32 AM
salman · cards.hiper.freezr · 10/1/2024, 6:36:51 PM
Great thinking and analysis, ultimately pointing to the incompatibility of current privacy laws with the new world of AI.
vischer.com
17 July 2024 - We have explained the technical aspects of a large language model in part 17 of our blog. But what conclusions can we draw from this in terms ...
Highlights
"
One possible solution to this problem is the use of so-called confidence thresholds, i.e. the systems are programmed in such a way that they either only produce an answer if the systems are rather certain of it or they indicate how certain they are of the individual statements. In the case of deterministic AI – i.e. systems that specialise in recognising or classifying certain things – such values are commonly used. In the field of generative AI, however, this is not yet very common. In our view, it should be used more often. For example, a chatbot can be programmed so that it only provides an answer if it is relatively certain of it. It is, however, not clear how high the probability must be for something to be considered (supposed) fact instead of (supposed) fiction.
"
Posted by salman on 10/1/2024, 6:36:52 PM