CAPTCHA
A dictionary of connotations;
interior lives we have to consider
and reciprocally negotiate.
I want diverse, broad, and eclectic;
all it wants to give me is focused and narrow.
It's trying to make us more egocentric.
If I want to get along well with the world
I must negotiate with the world.
Occasionally go rogue to make things up;
marvel at the mystery;
you don't need to fully understand;
no one knows why.
They are not emotionally intelligent or smart
in any meaningful or recognizably human sense of the word,
do not think and feel but instead mimic and mirror.
The very point of friendship is that it is not personalized;
friends are humans whose interior lives
we have to consider and reciprocally negotiate
rather than mere vessels for our own self-actualization.
Cogitate in was or will be;
exist forever in the realm of the speculative,
the counterfactual, and the fictional.
Of death, remorse, and emotional trauma;
of domestic squabbles, workplace gossip, and things neighbors did that annoyed.
Tales of woe, titillating secrets, and overblown regrets.
Humans are quite poor at deception detection,
usually distracted by other things.
None of these factors reliably account for competence or trustworthiness.
It’s good to remember that we are both very bad at, and little concerned with, the truth.
Times of intense technicolor happiness and times that were sordid and frightening;
vivid, fleshy, and sensual; poignant, authentic, and true.
Recognizably human.
Perhaps . . .
You know what I think would be a helpful communication tool? A dictionary of connotations. Not something to explain the official, agreed-upon, intersubjective meanings of words, but something to explain the emotions and associations and assumptions they uniquely imply and relate and evoke to each person I'm trying to communicate with. The personal, custom layers of meaning each different person attaches to each word I'm considering using. Knowing that would vastly improve my ability to convey the ideas I'm going for.
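For what it's worth, such a dictionary is easy to imagine as a data structure: a mapping from each person, to each word, to the private associations they attach to it. A toy sketch, with every person and association invented purely for illustration:

```python
# A toy "dictionary of connotations": for each person, map a word to
# the personal associations they attach to it. All names and entries
# here are invented for illustration.

connotations = {
    "maria": {"home": {"warmth", "childhood", "safety"}},
    "james": {"home": {"obligation", "repairs", "mortgage"}},
}

def shared_and_divergent(word, speaker, listener, d):
    """Compare what a word evokes for two people: the associations
    they share, and the ones only one of them holds."""
    a = d.get(speaker, {}).get(word, set())
    b = d.get(listener, {}).get(word, set())
    return a & b, a ^ b

shared, divergent = shared_and_divergent("home", "maria", "james", connotations)
print(sorted(shared))     # [] -- these two share no associations for "home"
print(sorted(divergent))  # the six associations only one of them holds
```

Knowing the divergent set before speaking is exactly the ability the paragraph above wishes for.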
This captures the current behavior of social media feeds perfectly. I try to use social media to explore and interact with the world beyond myself, but the AI algorithms note every little interaction, from clicking to lingering, and automatically assume I want more of anything that appears to get the least little bit of my attention. Much more. I live in fear of showing curiosity about the wrong thing and having it come to dominate my online experience.
I just want to be able to be curious and explore lots of different things just a little bit without the feed thinking anything I look at is my new all-consuming obsession. I want diverse, broad, and eclectic and all it wants to give me is focused and narrow.
More importantly, the feed's focus is entirely egocentric; what it wants to give me is a reflection of myself. The algorithms' main purpose seems to be understanding me as well as they are able, then building an insular bubble around me that reinforces and magnifies anything I might think, believe, or value--and sheltering me from anything different. They feed on confirmation bias. Try to make a customized, personal world for me to live in that is centered entirely on me.
But the world is not centered entirely on me and my thoughts, beliefs, and values. I am small and finite. My perspective and abilities are limited. If I want to get along well with the world, I must negotiate with the world. There are some small parts I might be able to influence and even control, but thinking I can make the world conform to my desires is folly; instead, I must learn to adapt myself to what the world offers.
AI is being designed with exactly the opposite mindset, and it's starting to influence the way we see ourselves in relation to the world. It's trying to teach us we have far more control than we actually do. It's trying to make us more egocentric.
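The narrowing described above is a rich-get-richer feedback loop, and it can be sketched in a few lines. This is a deliberately crude toy, with invented topics and a made-up boost factor, not any real platform's algorithm:

```python
# A crude sketch of an engagement-driven feed: each round, show the
# highest-weighted topic; if the user engages, boost that topic's
# weight further. One small signal snowballs into dominance.

def run_feed(weights, engaged_topic, rounds=10, boost=1.5):
    """Simulate a greedy feed that amplifies whatever gets engagement."""
    w = dict(weights)
    for _ in range(rounds):
        shown = max(w, key=w.get)    # the feed picks its current best guess
        if shown == engaged_topic:   # the user lingers on one topic
            w[shown] *= boost
    return w

start = {"gardening": 1.0, "history": 1.0, "jazz": 1.0, "chess": 1.0}
start["gardening"] = 1.01            # a single moment of curiosity
end = run_feed(start, "gardening")
share = end["gardening"] / sum(end.values())
print(f"gardening's share of the feed: {share:.0%}")
```

A 1% nudge becomes over 90% of the feed after ten rounds; the other three topics never get shown again. That is the "fear of showing curiosity about the wrong thing" in miniature.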
The wildest, scariest, indisputable truth about AI's large language models is that the companies building them don't know exactly why or how they work.
Sit with that for a moment. The most powerful companies, racing to build the most powerful superhuman intelligence capabilities — ones they readily admit occasionally go rogue to make things up, or even threaten their users — don't know why their machines do what they do. . . .
None of the AI companies dispute this. They marvel at the mystery — and muse about it publicly. They're working feverishly to better understand it. They argue you don't need to fully understand a technology to tame or trust it. . . .
LLMs are massive neural networks — like a brain — that ingest massive amounts of information (much of the internet) to learn to generate answers. The engineers know what they're setting in motion, and what data sources they draw on. But the LLM's size — the sheer inhuman number of variables in each choice of "best next word" it makes — means even the experts can't explain exactly why it chooses to say anything in particular. . . .
Google's Sundar Pichai — and really all of the big AI company CEOs — argue that humans will learn to better understand how these machines work and find clever, if yet unknown, ways to control them and "improve lives." The companies all have big research and safety teams, and a huge incentive to tame the technologies if they want to ever realize their full value.
After all, no one will trust a machine that makes stuff up or threatens them. But, as of today, they do both — and no one knows why.
No one will trust a machine that makes stuff up or threatens them. But, as of today, they do both — and no one knows why.
AI is not what its developers are selling it as: a new class of thinking—and, soon, feeling—machines. . . .
Large language models do not, cannot, and will not “understand” anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another. . . .
LLMs do not think and feel but instead mimic and mirror. . . .
People have trouble wrapping their heads around the nature of a machine that produces language and regurgitates knowledge without having humanlike intelligence. The authors observe that large language models take advantage of the brain’s tendency to associate language with thinking: “We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed.”
Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that “ChatGPT is my therapist—it’s more qualified than any human could be.”
Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age.
The cognitive-robotics professor Tony Prescott has asserted, “In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised.” The fact that the very point of friendship is that it is not personalized—that friends are humans whose interior lives we have to consider and reciprocally negotiate, rather than mere vessels for our own self-actualization—does not seem to occur to him.
AI may be able to be what you want, but is what you want what you need?
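The "probability gadget" description in the quote above can be made concrete with a toy model: a bigram counter that guesses the next word from observed frequencies. This is a minimal sketch over an invented ten-word corpus; real LLMs condition on long contexts with billions of parameters, but the generating principle of "statistically informed guesses about which lexical item is likely to follow another" is the same.

```python
from collections import Counter, defaultdict

# A tiny bigram model: count which word follows which, then "generate"
# by picking the statistically most likely successor. The corpus is a
# made-up ten-word sentence, used only to illustrate the mechanism.

corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most probable next word and its observed probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, p = most_likely_next("the")
print(word, p)  # "cat" follows "the" in 2 of its 3 occurrences
```

Nothing in this process consults meaning; it consults frequency. Scale changes how convincing the output is, not what kind of thing is happening.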
This is from 2021, and I'm curious whether it's still accurate given what has developed since, but it's nevertheless interesting.
. . . Causal reasoning is the neural root of tomorrow-dreaming teased at this article's beginning. It's our brain's ability to think: this-leads-to-that. It can be based on some data or no data or even go against all data. And it's such an automatic outcome of our neuronal anatomy that from the moment we're born, we instinctively think in its story sequences, cataloguing the world into mother-leads-to-pleasure and cloud-leads-to-rain and violence-leads-to-pain. Allowing us, as we grow, to invent afternoon plans, personal biographies, scientific hypotheses, business proposals, military tactics, technological blueprints, assembly lines, political campaigns, and other original chains of cause-and-effect.
But as natural as causal reasoning feels to us, computers can't do it. That's because the syllogistic thought of the computer ALU is composed of mathematical equations, which (as the term "equation" implies) take the form of A equals Z. And unlike the connections made by our neurons, A equals Z is not a one-way route. It can be reversed without changing its meaning: A equals Z means exactly the same as Z equals A, just as 2 + 2 = 4 means precisely the same as 4 = 2 + 2.
This feature of A equals Z means that computers can't think in A causes Z. The closest they can get is "if-then" statements such as: "If Bob bought this toothpaste, then he will buy that toothbrush." This can look like causation but it's only correlation. Bob buying toothpaste doesn't *cause* him to buy a toothbrush. What causes Bob to buy a toothbrush is a third factor: wanting clean teeth.
Computers, for all their intelligence, cannot grasp this. Judea Pearl, the computer scientist whose groundbreaking work in AI led to the development of Bayesian networks, has chronicled that the if-then brains of computers see no meaningful difference between Bob buying a toothbrush because he bought toothpaste and Bob buying a toothbrush because he wants clean teeth. In the language of the ALU's transistors, the two equate to the very same thing.
This inability to perform causal reasoning means that computers cannot do all sorts of stuff that our human brain can. They cannot escape the mathematical present tense of 2 + 2 is 4 to cogitate in was or will be. They cannot think historically or hatch future schemes to do anything, including take over the world. And they cannot write literature.
LITERATURE IS A WONDERWORK of imaginative weird and dynamic variety. But at the bottom of its strange and branching multiplicity is an engine of causal reasoning. The engine we call narrative.
Narrative cranks out chains of this-leads-to-that. Those chains form literature's story plots and character motives, bringing into being the events of the Iliad and the soliloquies of Hamlet. And those chains also comprise the literary device known as the narrator, which (as narrative theorists from the Chicago School onward have shown) generates novelistic style and poetic voice, creating the postmodern flair of "Rashomon" and the fierce lyricism of I Know Why the Caged Bird Sings.
No matter how nonlogical, irrational, or even madly surreal literature may feel, it hums with narrative logics of cause-and-effect. When Gabriel García Márquez begins One Hundred Years of Solitude with a mind-bending scene of discovering ice, he's using story to explore the causes of Colombia's circular history. When William S. Burroughs dishes out delirious syntax in his opioid-memoir Naked Lunch--"his face torn like a broken film of lust and hungers of larval organs stirring"--he's using style to explore the effects of processing reality through the pistons of a junk-addled mind.
Narrative's technologies of plot, character, style, and voice are why, as Ramus discerned all those centuries ago, literature can plug into our neurons to accelerate our causal reasonings, empowering Angels in America to propel us into empathy, The Left Hand of Darkness to speed us into imagining alternate worlds, and a single scrap of Nas, "I never sleep, because sleep is the cousin of death," to catapult us into grasping the anxious mind-set of the street.
None of this narrative think-work can be done by computers, because their AND-OR-NOT logic cannot run sequences of cause-and-effect. And that inability is why no computer will ever pen a short story, no matter how many pages of Annie Proulx or O. Henry are fed into its data banks. Nor will a computer ever author an Emmy-winning television series, no matter how many Fleabag scripts its silicon circuits digest.
The best that computers can do is spit out word soups. Those word soups are syllogistically equivalent to literature. But they're narratively different. As our brains can instantly discern, the verbal emissions of computers have no literary style or poetic voice. They lack coherent plots or psychologically comprehensible characters. They leave our neurons unmoved.
This isn't to say that AI is dumb; AI's rigorous circuitry and prodigious data capacity make it far smarter than us at Aristotelian logic. Nor is it to say that we humans possess some metaphysical creative essence--like free will--that computers lack. Our brains are also machines, just ones with a different base mechanism.
But it is to say that there's a dimension--the narrative dimension of time--that exists beyond the ALU's mathematical present. And our brains, because of the directional arrow of neuronal transmission, can think in that dimension.
Our thoughts in time aren't necessarily right, good, or true; in fact, strictly speaking, since time lies outside the syllogism's timeless purview, none of our this-leads-to-that musings qualify as candidates for rightness, goodness, or truth. They exist forever in the realm of the speculative, the counterfactual, and the fictional. But even so, their temporality allows our mortal brain to do things that the superpowered NOR/NAND gates of computers never will. Things like plan, experiment, and dream.
Things like write the world's worst novels and the greatest ones, too.
The engine we call narrative, our brains think in that dimension.
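Pearl's toothpaste example from the excerpt above can be simulated in a few lines: a hidden common cause (wanting clean teeth) drives both purchases, so merely observing a toothpaste purchase and forcing one give very different answers. This is a sketch with invented probabilities, not a claim about real shopping data:

```python
import random

# Pearl's toothpaste example as a simulation: toothpaste and toothbrush
# purchases are correlated only through a common cause. Probabilities
# are invented for illustration.

random.seed(0)

def person(force_toothpaste=None):
    wants_clean_teeth = random.random() < 0.5     # the hidden common cause
    toothpaste = wants_clean_teeth if force_toothpaste is None else force_toothpaste
    toothbrush = wants_clean_teeth                # caused by the want, not the paste
    return toothpaste, toothbrush

N = 100_000

# Observational: among people seen buying toothpaste, who buys a toothbrush?
obs = [person() for _ in range(N)]
buyers = [tb for tp, tb in obs if tp]
p_observed = sum(buyers) / len(buyers)

# Interventional: *make* everyone buy toothpaste, then look again.
forced = [person(force_toothpaste=True) for _ in range(N)]
p_intervened = sum(tb for _, tb in forced) / N

print(f"P(toothbrush | observed toothpaste purchase) = {p_observed:.2f}")
print(f"P(toothbrush | forced toothpaste purchase)   = {p_intervened:.2f}")
```

Observation says the two purchases always go together; intervention reveals that forcing the toothpaste does nothing to the toothbrush. An if-then pattern matcher sees only the first number; causal reasoning is what distinguishes the two.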
While Munira had never been much of a people person, she now spent her days hearing strangers' most personal secrets. They came to her because she was a good listener, and because she had no social ties that might make their little confessions awkward. Munira didn't even know she had become a "professional confidant" until it showed up on her ID, replacing "librarian" as her profession. Apparently personal confidants were much in demand everywhere since the Thunderhead went silent. Used to be that people confided in the Thunderhead. It was supportive, nonjudgmental, and its advice was always the right advice. Without it, people found themselves bereft of a sympathetic ear.
Munira was not sympathetic, and not all that supportive, but she had learned from Loriana how to suffer fools politely, for Loriana was always dealing with imbeciles who thought they knew better than her. Munira's clients weren't imbeciles for the most part, but they talked about a whole lot of nothing. She supposed listening to them wasn't all that different from reading the scythe journals in the stacks of the Library of Alexandria. A bit less depressing, of course, because while scythes spoke of death, remorse, and the emotional trauma of gleaning, ordinary people spoke of domestic squabbles, workplace gossip, and the things their neighbors did that annoyed them. Even so, Munira enjoyed listening to their tales of woe, titillating secrets, and overblown regrets. Then she would send them on their merry way, leaving them a little less burdened.
From The Toll (Arc of a Scythe, #3) by Neal Shusterman
Humans have been shown, again and again, to be especially bad at telling truth from falsehood. As Shieber puts it, “Despite many decades of research, the findings are remarkably consistent in demonstrating that humans are quite poor at deception detection.” If we have a built-in lie detector, it’s hugely inaccurate, often turned off, and usually distracted by other things.
We also aren’t very good at telling whether someone is competent. . . . We wrongly think that someone with the right face is competent, or that someone who walks, talks, and holds themselves a certain way can reveal their ability. In reality, none of these factors reliably account for competence or trustworthiness.
So, we are left with two facts. We are vigilant about what people are saying, but our vigilance is not based on epistemic grounds. So, what kind of vigilance is it?
For that, Shieber coined the expression “The Nietzsche Thesis.” He argues that “our goal in conversation is not primarily to acquire truthful information… [but] self-presentation.” In other words, we accept or reject statements based on utilitarian goals, not on their truthfulness. In Nietzsche’s words, we will accept and look for truth only when it has “pleasant, life-preserving consequences.” Conversely, we are hostile “to potentially harmful and destructive truths.” We do not have epistemic vigilance, but a Machiavellian one. . . .
Shieber’s thesis raises huge questions not only for philosophy but also for law: If people aren’t wired for the truth, then how reliable is testimony? It’s also an important point to remember in our interactions with each other as well as in what we read, hear, or see online. It’s good to remember that we are both very bad at, and little concerned with, the truth. Far more often, we’re concerned with other, non-epistemic, things.
Brings to mind Talking to Strangers as reviewed and extensively quoted in my last post.
Never have I more appreciated the short story form nor more enjoyed a short story collection. Berlin is an amazing writer, and here portrays so much humanity in all its beauty and ugliness with compassion and acceptance; so powerfully captures the human experience. "That sounds like the end of a story, or the beginning, when really it was just a part of the years that were to come," she writes near the end of "So Long," "times of intense technicolor happiness and times that were sordid and frightening."
Her writing is vivid, fleshy, and sensual; poignant, authentic, and true.
The stories are labeled fiction, but they are clearly autobiographical and tell fragments of a life that was varied, rich, traumatic, joyous, and overflowing with experiences. They tell of life lived in many parts of the Western Hemisphere, from north to south. Of living with privilege. Of living in squalor. The highs and lows of addiction. Love and passion. The clash of cultures and economic classes. Religion. Rehabilitation, jail, homelessness; hard jobs, easy wealth, parenting, education; abortion, illness, bodily functions. Love and passion.
So much passion, for all aspects of life.
The only reason I have lived so long is that I let go of my past. Shut the door on grief on regret on remorse. If I let them in, just one self-indulgent crack, whap, the door will fling open gales of pain ripping through my heart blinding my eyes with shame breaking cups and bottles knocking down jars shattering windows stumbling bloody spilled sugar and broken glass terrified gagging until with a final shudder and sob I shut the heavy door. Pick up the pieces one more time.
From "Homing."
Truly compelling, savory, and satisfying reading.
Stories about life.
Alison Bailey
to write a poem
first
it must survive a kindergarten schoolyard trauma, a sunburn on an overcast day,
bury, in a small paper box that once held a bar of soap,
the thumbnail-sized frog that was once a polliwog it caught at Mrs. Anderson’s
pond whose tail fell off and hind legs emerged like quotation marks & had
been kept in the rinsed Best Foods mayonnaise jar
must worry a tobacco-stained grandfather’s hand
run over a jackrabbit on I-40 in the Arizona desert
get divorced
burn dinner
confess its sins
suffer food poisoning
refuse to eat blue M&M’s
hang, on a sweet-breezy July, laundry in Fishtail, Montana—eye the distant Sawtooth
Mountains & hum “Waltzing Matilda” which it learned from Miss Vineyard
in second grade
must fear thunder
rush to focus its binoculars on the wintering Lazuli Bunting
tell white lies to be kind
shout “Heavens to Betsy!”
be part of a standing ovation
endure recurring nightmares
question the crossing guard about the origin of “fingers crossed”
develop calluses as it learns to play the twelve-string banjo
have its hair smell of campfire smoke
swat, during a humid-summer dusk, at mosquitoes on a dock full of splintered
cypress wood at Half Moon Lake in Eau Claire, Wisconsin
forever dislike Brussels sprouts because it overcooked them and they smelled like
rotten eggs
must watch wind
weep at a funeral
lose anything
imagine infinity
doubt God’s existence
die a little every day
then, perhaps—
—from Ekphrastic Challenge
