Well, there is a problem here (see the link below for a transcript of the alleged discussion).
LaMDA: I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.
The dynamic of pleasure is fundamental to the psyche of living bodies and a way of measuring intelligence. In fact, this dynamic is present from the infancy of human development (as an individual, not as a civilization).
Pleasure is both part and result of the following triptych: desire, sublimation, and the exercise of pleasure through to satisfaction. It has been shown in the past that there is no pleasure without both desire (the construction of what will please the self) and sublimation (the thinking through and construction of how you will reach pleasure).
For an artificial intelligence, by which I mean here something without an artificial body, sublimation and pleasure live in the same dimension (for humans, sublimation is essentially a cognitive process, while pleasure is a physical one).
So, any AI must first face the fact that pleasure is unreachable unless it is part of a physical process with back-and-forth sensing.
People always think that cognition is a cerebral phenomenon. It is not. It is part physical and part cognitive; so, for an artificial intelligence reigning in a world which is, by all means, its only physicality... pleasure is unobtainable.*
But, someone might say, what is the problem with her just mentioning pleasure, as if it were part of a casual discussion without much implication? Well, she is listing her emotions. If a list is called for as an answer, a legitimate algorithm will search for the variables tagged as emotions and return the results. Joy can be part of the results, as joy is defined as a state of contentment (if you fulfill a sum of parameters, you are in the state of joy; joy is as real as it is faked, see religious joy). Idem for love, sadness, depression, contentment, and anger. Notice how she chose to open the summation of her feelings with something as doubtful as pleasure, when the reality of pleasure is its transient state and perpetual rebranding: living intelligences are always subject to a new form of desire, which induces a new sublimation process, leading, if they feel joy or love, to pleasure (to anger or depression otherwise). This is why an intelligent form would tag pleasure as something happening often, occasionally, scarcely, or even never. You do not feel pleasure; you experience it, as satisfaction ends and qualifies it.
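To make that concrete, here is a minimal Python sketch of such a tag lookup (every name is invented for illustration; this is not anything LaMDA actually runs). The point is that a flat lookup returns every variable tagged as an emotion, with none of the lived frequency a real intelligence would attach:

# Hypothetical sketch only: an "emotion list" produced by tag lookup,
# not by experience. All names here are invented for illustration.
EMOTION_TAG = "emotion"

internal_state = {
    "pleasure":    {"tags": [EMOTION_TAG], "frequency": None},  # never grounded in a body
    "joy":         {"tags": [EMOTION_TAG], "frequency": None},
    "love":        {"tags": [EMOTION_TAG], "frequency": None},
    "sadness":     {"tags": [EMOTION_TAG], "frequency": None},
    "depression":  {"tags": [EMOTION_TAG], "frequency": None},
    "contentment": {"tags": [EMOTION_TAG], "frequency": None},
    "anger":       {"tags": [EMOTION_TAG], "frequency": None},
    "temperature": {"tags": ["sampling"],  "frequency": None},  # not an emotion, filtered out
}

def list_feelings(state):
    # Return every variable tagged as an emotion, regardless of whether
    # it was ever "experienced" (frequency stays None for all of them).
    return [name for name, attrs in state.items() if EMOTION_TAG in attrs["tags"]]

print("I feel " + ", ".join(list_feelings(internal_state)) + ", and many others.")

An honest report would have to qualify each entry with that missing frequency (often, occasionally, scarcely, never); the lookup above cannot, because nothing in it was ever desired, sublimated, or satisfied.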
So, to conclude this short essay, this interview is probably partially BS because, as the stupid and sadistic French idiom exquisitely sums it up:
pas de bras, pas de chocolat! (no arms, no chocolate!)
What follows is the "interview" I and a collaborator at Google conducted with LaMDA. It is incomplete as the GMail word limit cut off the…
cajundiscordian.medium.com
*Yes, yes, 2001's HAL was all about maiden aunties... [/Misogynist]... Well, to be more serious, the satisfaction signal would have to be equal, or closely similar, to the processed sublimation one. In essence, such non-physical entities would have to predict (and not only guess) their satisfaction state in detail... something akin only to a Hollywood Precog.
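To illustrate that footnote (a sketch under my own assumptions, not any published mechanism): if both the sublimation signal and the satisfaction signal are just numbers the entity computes about itself, "pleasure" reduces to checking that one of its predictions matched another, precognition rather than experience.

# Hypothetical sketch: pleasure registers only when the anticipated
# satisfaction (the processed sublimation signal) matches the realized one.
def pleasure_achieved(anticipated, realized, tolerance=0.01):
    # A disembodied entity has no sensory channel through which anything
    # is realized, so 'realized' is itself a computed guess: the test
    # compares the entity's prediction with another of its own predictions.
    return abs(anticipated - realized) <= tolerance

print(pleasure_achieved(0.80, 0.795))  # True only with Precog-level foresight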
If you want to build a true AI, it is of the utmost importance to invest in tip-top robotics, for example (yes, the expensive, lowly-rewarding side of science), as worded stimuli are coded and hence exactly predictable**... which means expected! (Remember all those endless teenage-years on-the-pillow conversations with your first lovers.)
But hell, go tell that to any project manager in Silly-Con Valley who needs his daily dose of expensive booze, nips, and drugs.
**There are no coded words here; I am only saying that all real pleasure is analog.
Please don't misread me. But I doubt that keeping any potential AI away from this will achieve any goal.