Public Posts

User: salman
salman · cards.hiper.freezr · 2/8/2026, 6:04:08 PM
very deep
darioamodei.com
Confronting and Overcoming the Risks of Powerful AI
Highlights
"
Models inherit a vast range of humanlike motivations or “personas” from pre-training (when they are trained on a large volume of human work). Post-training is believed to select one or more of these personas more so than it focuses the model on a de novo goal, and can also teach the model how (via what process) it should carry out its tasks, rather than necessarily leaving it to derive means (i.e., power seeking) purely from ends
"
"
The concern is that there is some risk (far from a certainty, but some risk) that AI becomes a much more powerful version of such a person, due to getting something wrong about its very complex training process.
"
"
Recall that these AI models are grown rather than built
"
"
For example, in the last 2 years, AI models went from barely being able to complete a single line of code, to writing all or almost all of the code for some people—including engineers at Anthropic.
"
"
AI isn’t a substitute for specific human jobs but rather a general labor substitute for humans.
"
Posted by salman on 2/8/2026, 6:04:09 PM
salman · cards.hiper.freezr · 4/27/2025, 10:54:30 AM
This relates to my post on agentic AI: The uncomfortable realm of explicit moral values.
www.axios.com - By Sam Sabin
Managing those AI identities will require companies to completely reassess their cybersecurity strategies.
Highlights
Posted by salman on 4/27/2025, 10:54:32 AM
salman · com.salmanff.poster · 4/20/2025, 9:36:04 AM
With increasing access to data, AI can start to represent us, in both senses of the word: both acting as our agent and as a representation of our being, a sort of AI-twin. What implications will that have? (This is a third post on some of the existential questions related to AI.)
...
labels:
salman · cards.hiper.freezr · 3/1/2025, 7:58:30 AM
Tim is brilliant (as usual) in this post
Highlights
Posted by salman on 3/1/2025, 7:58:32 AM
salman · com.salmanff.poster · 2/3/2025, 10:03:18 PM
As AI takes on more and more agentic actions, it will run into digital versions of the trolley problem on its way, where it would have to draw on the core set of values embedded within it to decide on a best course of action. This will force those values to have to be stated explicitly.
...
labels:
salman · com.salmanff.poster · 2/1/2025, 9:24:34 PM
If a perfect replica of Sam Altman’s brain could be created, would that be considered a human being? What if, like the ship of Theseus, we took this in steps and replaced Sam’s brain part by part, substituting more and more of his brain with a computer version? At what point does he stop being a human being?
...
labels: @sama
salman · com.salmanff.poster · 1/25/2025, 7:38:30 AM
salman · cards.hiper.freezr · 10/1/2024, 6:36:51 PM
Great thinking and analysis, ultimately pointing to the incompatibility of current privacy laws with the new world of AI.
vischer.com
17 July 2024 - We have explained the technical aspects of a large language model in part 17 of our blog. But what conclusions can we draw from this in terms ...
Highlights
"
One possible solution to this problem is the use of so-called confidence thresholds, i.e. the systems are programmed in such a way that they either only produce an answer if the systems are rather certain of it or they indicate how certain they are of the individual statements. In the case of deterministic AI – i.e. systems that specialise in recognising or classifying certain things – such values are commonly used. In the field of generative AI, however, this is not yet very common. In our view, it should be used more often. For example, a chatbot can be programmed so that it only provides an answer if it is relatively certain of it. It is, however, not clear how high the probability must be for something to be considered (supposed) fact instead of (supposed) fiction.
"
Posted by salman on 10/1/2024, 6:36:52 PM
salman · com.salmanff.vulog · 7/28/2024, 7:40:20 AM
Disorders due to inbreeding - another quasi-human trait of AI. 🙃
Research suggests use of computer-made ‘synthetic data’ to train top AI models could lead to nonsensical results in future
Highlights
Posted by salman on 7/28/2024, 7:40:21 AM
Key words:
salman · com.salmanff.vulog · 7/16/2024, 8:10:24 PM
I always thought AI would surface a lot of interesting philosophical questions about what it means to be human and what intelligence is. But I have rarely come across pieces that tackle these issues intelligently. This article does.
Grief-laden vitriol directed at AI fails to help us understand paths to better futures that are neither utopian nor dystopian, but open to radically weird possibilities.
Highlights
These insights don’t change the fundamental realities of the natural world — they reveal it to be something very different than what our intuitions and cultural cosmologies previously taught us. That revealing is the crux of the trauma. All the stages of grief are in response to the slow and then sudden fragmentation of previously foundational cultural beliefs. Like the death of a loved one, the death of a belief is profoundly painful.
The premise is that modern governments as we know them are the executives of the transformations to come and not an institutional form that will be overhauled if not absorbed by them. For better or worse, the latter scenario may be more plausible.
The leap of faith that human values are self-evident, methodologically discoverable and actionable, constructive, and universal is the fragile foundation of the alignment project. It balances on the idea that it will be possible to identify common concerns, to poll communities about their values and conduct studies about the ethics of possible consumer products, that it will be possible and desirable to ensure that the intelligence earthquake is as comfortable as possible for as many people as possible in as many ways as possible.
This stage of grief clings to the hope that if we start bargaining with the future then the future will have no choice but to meet us halfway. If only.
To what extent is the human artificialization of intelligence via language (as for an LLM) a new technique for making machine intelligence, and to what extent is it a discovery of a generic quality of intelligence, one that was going to work eventually, whenever somebody somewhere got around to figuring it out? If the latter, then AI is a lot less contingent, less sociomorphic, than it appears. Great minds are necessary to stitch the pieces, but eventually somebody was going to do it. Its inventors are less Promethean super-geniuses than just the people who happened to be there when some intrinsic aspect of intelligence was functionally demystified.
It does mean, however, that human intelligence is not what human intelligence thought it was all this time. It is both something we possess but which possesses us even more. It exists not just in individual brains, but even more so in the durable structures of communication between them, for example, in the form of language.
Like “life,” intelligence is modular, flexible and scalar, extending to the ingenious work of subcellular living machines and through the depths of evolutionary time. It also extends to much larger aggregations, of which each of us is a part, and also an instance. There is no reason to believe that the story would or should end with us; eschatology is useless. The evolution of intelligence does not peak with one terraforming species of nomadic primates.
Posted by salman on 7/16/2024, 8:12:49 PM
Key words: