>>
Posted by salman on 4/27/2025
This relates to my post on agentic AI: The uncomfortable realm of explicit moral values.
Managing those AI identities will require companies to completely reassess their cybersecurity strategies.
>>
Posted by salman on 4/20/2025
With increasing access to data, AI can start to represent us, in both senses of the word: acting as our agent, and serving as a representation of our being, a sort of AI-twin. What implications will that have? (This is a third post on some of the existential questions related to AI.)
...
>>
Posted by salman on 3/1/2025
Tim is brilliant (as usual) in this post.
>>
Posted by salman on 2/3/2025
As AI takes on more and more agentic actions, it will run into digital versions of the trolley problem along the way, where it will have to draw on the core set of values embedded within it to decide on the best course of action. This will force those values to be stated explicitly.
...
>>
Posted by salman on 2/1/2025
If a perfect replica of Sam Altman’s brain could be created, would that be considered a human being? What if, like the ship of Theseus, we took this in steps, replacing Sam’s brain part by part, substituting more and more of it with a computer version? At what point does he stop being a human being?
...
labels: @sama
>>
Posted by salman on 10/1/2024
Great thinking and analysis, ultimately pointing to the incompatibility of current privacy laws with the new world of AI.
17 July 2024 - We have explained the technical aspects of a large language model in part 17 of our blog. But what conclusions can we draw from this in terms ...
Highlights
One possible solution to this problem is the use of so-called confidence thresholds: the systems are programmed so that they either produce an answer only if they are reasonably certain of it, or they indicate how certain they are of each individual statement. In the case of deterministic AI – i.e. systems that specialise in recognising or classifying certain things – such values are commonly used. In the field of generative AI, however, this is not yet very common. In our view, it should be used more often. For example, a chatbot can be programmed so that it only provides an answer if it is relatively certain of it. It is, however, not clear how high the probability must be for something to be considered (supposed) fact instead of (supposed) fiction.
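The gating idea described above can be sketched in a few lines. Everything below is illustrative: the per-token log-probabilities, the geometric-mean confidence measure, and the 0.9 threshold are my own assumptions, not values from the article.

```python
import math

# Illustrative cutoff: below this average token confidence the chatbot
# abstains. The value 0.9 is an assumption, not a recommended setting -
# as the article notes, nobody knows where "fact" should start.
CONFIDENCE_THRESHOLD = 0.9

def answer_or_abstain(answer, token_logprobs, threshold=CONFIDENCE_THRESHOLD):
    """Return the answer only if the model's per-token confidence
    (geometric mean of token probabilities) clears the threshold."""
    # Geometric mean of probabilities = exp(mean of log-probabilities)
    confidence = math.exp(sum(token_logprobs) / len(token_logprobs))
    if confidence >= threshold:
        return answer, confidence
    return "I am not certain enough to answer.", confidence

# Hypothetical log-probabilities, as a model API might report them
confident_tokens = [-0.01, -0.02, -0.05]   # near-certain tokens
uncertain_tokens = [-0.8, -1.2, -0.4]      # shaky tokens

print(answer_or_abstain("Paris", confident_tokens))
print(answer_or_abstain("maybe 42?", uncertain_tokens))
```

The same threshold could instead be reported alongside the answer ("I am 45% confident that…"), which is the article's second variant of the idea.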
>>
Posted by salman on 7/28/2024
Disorders due to inbreeding - another quasi-human trait of AI. 🙃
Research suggests use of computer-made ‘synthetic data’ to train top AI models could lead to nonsensical results in future
>>
Posted by salman on 7/16/2024
I always thought AI would surface a lot of interesting philosophical questions about what it means to be human, and what intelligence is. But I have rarely come across pieces that tackle these issues intelligently. This article does.
Grief-laden vitriol directed at AI fails to help us understand paths to better futures that are neither utopian nor dystopian, but open to radically weird possibilities.
Highlights
These insights don’t change the fundamental realities of the natural world — they reveal it to be something very different than what our intuitions and cultural cosmologies previously taught us. That revealing is the crux of the trauma. All the stages of grief are in response to the slow and then sudden fragmentation of previously foundational cultural beliefs. Like the death of a loved one, the death of a belief is profoundly painful.
The premise is that modern governments as we know them are the executives of the transformations to come and not an institutional form that will be overhauled if not absorbed by them. For better or worse, the latter scenario may be more plausible.
The leap of faith that human values are self-evident, methodologically discoverable and actionable, constructive, and universal is the fragile foundation of the alignment project. It balances on the idea that it will be possible to identify common concerns, to poll communities about their values and conduct studies about the ethics of possible consumer products, that it will be possible and desirable to ensure that the intelligence earthquake is as comfortable as possible for as many people as possible in as many ways as possible.
This stage of grief clings to the hope that if we start bargaining with the future then the future will have no choice but to meet us halfway. If only.
To what extent is the human artificialization of intelligence via language (as for an LLM) a new technique for making machine intelligence, and to what extent is it a discovery of a generic quality of intelligence, one that was going to work eventually, whenever somebody somewhere got around to figuring it out? If the latter, then AI is a lot less contingent, less sociomorphic, than it appears. Great minds are necessary to stitch the pieces, but eventually somebody was going to do it. Its inventors are less Promethean super-geniuses than just the people who happened to be there when some intrinsic aspect of intelligence was functionally demystified.
It does mean, however, that human intelligence is not what human intelligence thought it was all this time. It is both something we possess but which possesses us even more. It exists not just in individual brains, but even more so in the durable structures of communication between them, for example, in the form of language.
Like “life,” intelligence is modular, flexible and scalar, extending to the ingenious work of subcellular living machines and through the depths of evolutionary time. It also extends to much larger aggregations, of which each of us is a part, and also an instance. There is no reason to believe that the story would or should end with us; eschatology is useless. The evolution of intelligence does not peak with one terraforming species of nomadic primates.
>>
Posted by salman on 5/10/2023
Sal Khan, the founder and CEO of Khan Academy, thinks artificial intelligence could spark the greatest positive transformation education has ever seen. He shares the opportunities he sees for students and educators to collaborate with AI tools -- including the potential of a personal AI tutor for every student and an AI teaching assistant for every teacher -- and demos some exciting new features for their educational chatbot, Khanmigo.
Key words: ted talks technology education ai teaching kids
>>
Posted by salman on 12/21/2022
The wave of enthusiasm around generative networks feels like another Imagenet moment - a step change in what ‘AI’ can do that could generalise far beyond the cool demos. What can it create, and where are the humans in the loop?
Highlights
Instead of people trying to write rules for the machine to apply to data, we give the data and the answers to the machine and it calculates the rules. This works tremendously well, and generalises far beyond images, but comes with the inherent limitation that such systems have no structural understanding of the question - they don’t necessarily have any concept of eyes or legs, let alone ‘cats’.
If I ask for ‘the chest burster scheme in Alien as directed by Wes Anderson’ and get a 92% accurate output, no-one will complain that Sigourney Weaver had a different hair style. But if I ask for some JavaScript, or a contract, I might get a ‘98% accurate’ result that looks a lot like the JavaScript I asked for, but the 2% error might break the whole thing. To put this another way, some kinds of request don’t really have wrong answers, some can be roughly right, and some can only be precisely right or wrong, and cannot be ‘98% correct’.
Yahoo tried paying people to catalogue the entire web one site at a time, and that was unscalable. Google, on one side, is based on the patterns of aggregate human behaviour of the web, and on the other side it gives you ten results and makes you pick one - manual curation by billions of users. The index is made by machine, but the corpus it indexes is made by people and the results are chosen by people. In much the same way, generative networks, so far, rely on one side on patterns in things that people already created, and on the other on people having new ideas to type into the prompt and picking the ones that are good.
But the other side of this is that ML gives you not infinite interns but one intern with super-human speed and memory - one intern who can listen to a billion calls and say ‘you know, after 300m calls, I noticed a pattern you didn’t know about…’
Actually what ML gives you is not only depth (ie the ability to “listen to a billion calls”) but also breadth: the ability to read a thousand books on psychology and persuasion tactics and other unrelated subjects and tie them into its conclusions. What is human ingenuity other than adding patterns from two seemingly unrelated subjects together? Perhaps ML will be brilliant - or perhaps our human ingenuity is just much simpler than assumed.