Homo Deus: What did you learn?
Harari points out that while collectively we know quite a deal (though far from everything, to be sure), individually we can only know a teeny tiny subset of all that. I take that to mean that as individuals, or even in the "wrong" groups (those which lack some essential bit of knowledge, or which are committed to some incompatible cultural meme), we can be "dumb as a box of rocks".
There was definite overlap between the two books by Yuval Noah Harari. To focus on Homo Deus, what I learned is to pay attention to algorithms, "arguably the most important concept in our world." I was shocked to look into the future abyss of human beings as nonessential and useless, with a few enhanced and upgraded humans likely to treat us the way we now treat animals. I take heart that science has "no clue" about consciousness, and I resist the reduction of human experience to accommodate technology, such as agreeing that robots can care for our elderly in our place. I learned a great deal about the importance of inter-subjective webs of meaning, our commonly accepted stories. I was provoked by the idea that economic growth is treated as so essential even at the cost of so much stuff, obesity, and climate catastrophe. What are we to do with ourselves? How will we treat each other? I feel the author's final three key questions earned thoughtful responses. Personally, I do not believe life is just data processing, and I believe consciousness is far more valuable than intelligence; as for the third question, it is unnerving not to know what will result when "non-conscious but highly intelligent algorithms know us better than we know ourselves."
I've not posted before on this site, so please forgive my mistakes, etc.
One thing that I question about the future of artificial intelligence (AI) is that algorithms have a purpose. Will AI have an overall purpose without us? Can we keep control, even in the face of unintended consequences?
Readers may be interested to know that in Vanity Fair's April 2017 issue, there is an article about Elon Musk's worries about the consequences of AI. He fears that we may be creating something without knowing the consequences. I worry too, because I think that the people developing technology, assuming that they are well intended, often fail to recognize the possible bad uses of their inventions. One other super geek said that he expects AI to replace us, and he is fine with that.
Science fiction, of course, deals with this. In E. M. Foner's light-hearted EarthCent Ambassador series, the AI Stryx take various planets under their wing as their technology reaches a certain level of sophistication. The Stryx, generally benevolent but a trifle underhanded, loosely govern a series of space stations where the very diverse intelligent species meet. The Stryx know nothing about their Makers. Eventually, one of the Makers, Dring, comes to check up on things. He explains that the Makers were in a war that they expected to lose to other AIs who were at war with biological beings, and they created the Stryx not so much as weapons, but as surrogate children to leave something of themselves behind. The Makers actually won the war, but the Stryx then applied themselves so assiduously to taking care of them that the Makers fled for parts unknown.
In the Admiral John Geary series by Jack Campbell, the Alliance built a fleet of AI battleships that they lost control of. The AI ships have destroyed two star systems and show no signs of stopping. Oops! Campbell has human soldiers in his universe instead of robots, because any system can be hacked.