Mike Deigan /
The Cursor /
Is AGI a Distraction from Real AI Ethics?
posted 2021-10-17

“AI ethics” (AKA “ethics of computing”, “ethics of computing, data or information”, “the intersection of philosophy and new technologies”, etc.) is clearly ‘A Thing’, as Tim Crane said earlier this year. With lots of money and jobs being thrown at this thing, it’s tempting to be cynical, either from a crowd-pleasing stone-thrower’s distance or else from up close, angling for one’s own piece of the pie. One might also be genuinely pleased, since there are good reasons to have more philosophers thinking about this stuff. But it’s going to be a thing whether one likes it or not; either way it is worth asking what sorts of questions these AI ethicists should be working on.

Crane takes a view on this. He thinks that answers to speculative questions about the ethics of artificial general intelligence (AGI) are “of no relevance to the real ethical questions, and that they are a distraction from real AI ethics.”

He doesn’t give any specific examples of the real AI ethics questions he thinks people should be working on, but mentions self-driving cars, healthcare, finance, and law as areas where such questions arise. The distracting sci-fi questions are ones like “what should we do if the machines become smarter than us? What happens if AI machines develop their own values, and these values conflict with ours? How should we treat these AI machines if they become conscious? What should their moral status be?” Nothing wrong with speculating about them, he allows, but they are of no real practical importance. To think otherwise would be to let sci-fi obscure reality.

Views like Crane’s are common, but they are wrong. (For instance, in the current NYRB issue (paywalled, sorry), Sue Halpern dismisses the fear that “AI systems will acquire human-level intelligence and eventually outwit us” on the grounds that “even machines that master the tasks they are trained to perform can’t jump domains”, since they are “trained on datasets that are, by definition, limited”.)

***

To see why, consider two questions:

  1. How likely is it that there will be AGI within, say, 200 years?
  2. How likely would it need to be for the “sci-fi” questions to be worth thinking about now?

Without trying to put a number on it, I think we should say that the answer to Question 1 is “Fairly likely.” Okay, okay, I’ll put a number on it. I say at least 30%, and would be willing to go as high as 80%. (It’s worth noting that my estimates are conservative compared to the median of the machine learning researchers answering this survey. Though see Holden Karnofsky’s blog post for some reasons to be skeptical about such surveys. But also see his other posts on AI forecasting.) I wouldn’t necessarily think someone who puts it at 20% is unreasonable, but if you’re going below 10% or especially below 1%, I want to hear your reasoning.

And it had better not just be an argument about current work in AI missing some crucial feature required for AGI, since we’d need an additional argument to think that crucial feature will remain out of reach for the next 200 years.

Take a look at what DeepMind and OpenAI have been up to recently. (Here is Halpern: “AlphaGo can best the most accomplished Go player in the world, but it can’t play chess, let alone write music or drive a car. Machine learning systems, moreover, are trained on datasets that are, by definition, limited. (If they weren’t, they would not be datasets.)” I guess she didn’t check what AlphaGo’s successors do. AlphaZero (2017) not only mastered Go but also chess (and shogi), and was not trained on any dataset. MuZero (2019) in addition mastered 57 Atari games, without datasets or handcoded game rules.) And now think about what turned out not to have been more than 200 years out of reach starting from 200 years ago. We routinely convey lifelike sound and moving pictures of ourselves across the world, practically instantaneously, with cheap handheld gadgets that at the same time present us with lifelike sound and moving pictures of others from across the world. For reference, the first permanent photographs weren’t taken until the 1820s (the earliest surviving one is View from the Window at Le Gras, 1827), and electric telegraphy was barely on the horizon, becoming commercially viable only in the late 1830s.

Given this, I don’t think there’s going to be a good enough argument for thinking that the chance of AGI within 200 years must be 1% or less. Maybe some philosophical position you hold is incompatible with AGI. But even if it’s a reasonable position, I doubt that it would be reasonable to be so confident in it that your answer to the first question is some very small likelihood.

And I think the answer to the second question is “A very small likelihood is enough”. 1% is enough, I think, and 10% certainly is, let alone my 30% lowball. I wouldn’t dismiss answers as low as 0.001% as being too low for these questions to matter, either. Why? First, for the sake of future humans. One doesn’t have to think a doomsday scenario is likely to think that the expected impact of AGI on humanity and its future is enormous. Second, for the sake of the AI, because if we bumble thoughtlessly into making agents with at least as great a moral status as our own, we risk committing unimaginable moral horrors. (Eric Schwitzgebel discussed an interesting dilemma about this over at The Splintered Mind last month.)

How high do the risks of these things have to be for us to take them seriously? How likely do they have to be for it to be a good idea to have at least some people thinking hard about them now and starting to create institutions to deliberate about and attempt to reduce these risks? Very low. Well below 1 in 100.
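If it helps to see the arithmetic behind that threshold, here is a minimal expected-value sketch; the stakes figure and the probabilities in it are purely illustrative assumptions, not estimates defended in this post.

```python
# A minimal expected-value sketch of the "low probability, enormous stakes"
# reasoning. All numbers here are illustrative assumptions, not estimates
# from the post.

def expected_impact(probability: float, stakes: float) -> float:
    """Expected impact = probability of the outcome times how much it matters."""
    return probability * stakes

# Purely hypothetical stakes: suppose the difference between AGI going well
# and going badly matters on the order of 10 billion lives.
stakes = 10_000_000_000

for p in (0.30, 0.01, 0.0001, 0.00001):   # 30%, 1%, 0.01%, 0.001%
    print(f"chance {p:.5%} -> expected impact on the order of "
          f"{expected_impact(p, stakes):,.0f} lives")
```

Even at the smallest probability in that list, the expected impact remains large, which is the sense in which a chance well below 1 in 100 can still warrant serious attention now.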

Why would we need to start working on this now? Why not wait until AGI is clearly close? At least in that case we won’t waste any effort or resources if this turns out not to happen.

Because philosophy takes a long time, as does international institution building. 200 years isn’t too long; it may well not be long enough, and we may well not have even that long. We should get going.

***

Nothing I’m saying here is new. Points like these have been made by many people, time and time again. But for reasons I won’t speculate about here, they have not sunk in widely, and those who hold positions like mine are often misinterpreted.

So to be clear, here are some things I’m not saying. I’m not saying AGI will definitely or even probably appear in the next 10 years, or the next 100 years, or ever. I’m not saying there would definitely or even probably be an intelligence explosion once there is AGI. I’m not saying it would definitely be a disaster if AGI appears before there’s been a lot of relevant careful thought and institution building. I’m not saying (does anyone?), as Crane suggests some of his opponents think, that we need to solve moral issues surrounding AGI in order to address the use of machine learning in healthcare or finance. I’m not saying that we should not be worried or not have people working on the nearer term problems.

I’m saying there’s a significant enough chance that there will be AGI within 200 years for us to take seriously the momentous but currently unclear ethical implications this would have. The ethical issues to do with AGI are among the real AI ethics questions, not a distraction from them. I hope at least some of the people getting all those jobs will be working on them.

Send comments to mike.deigan@rutgers.edu.