Utrecht University

Responsible Intelligent Systems


Yet another misguided prediction about AI

New Scientist reports on a survey of AI researchers who, according to the magazine, predict that around 2060 AI will outperform humans on every task. Elon Musk added on Twitter that he thinks this prediction is inaccurate: according to him, it will already happen around 2030–2040. I do not believe these predictions. I think there will certainly be tasks left that humans will do better. To me it seems obvious that computers of the kind we have now will never be capable of writing good poems, doing philosophy, giving psychological advice, mourning, showing compassion, or understanding mathematics, to name a few things.

I find it interesting that people who do believe AI will someday surpass us in everything have no argument other than that AI is constantly getting better at tasks. What many people do not seem to realise is that AI is getting better mostly because computers are getting faster and more data is available. But they are still the same kind of limited, deterministic machines, which in my eyes are unlikely to be up to every task performed by humans. Computers will get better and better. But AI will not improve at the same pace; at a certain point it will bump into the boundaries determined by the computational concepts and theories our current machines are based on.

2 Responses to “Yet another misguided prediction about AI”

  1. Jan Broersen

    I think you might think that I think that we are not machines. But that all depends on what you mean by ‘machines’. I think we are not Turing machines. But I believe we might be machines of some other, yet to be defined type (biological, quantum, hyper-Turing, whatever). When I was talking about the boundaries of our current computational machines, I meant Turing machines.

    I like that you mention the burden of proof. You seem to suggest that the burden of proof is with me: that I have to prove that we are not Turing machines. More generally, you say that the burden of proof lies with the one claiming that something is impossible. I do not think that is accurate. The burden of proof is with anyone making a claim, so you just as much have to prove that it is possible. We both have a claim, and they happen to be opposites; we both carry the burden of proof. Proof neither of us has. But I see a lot of common-sense evidence and philosophical arguments on my side of the claim, and not so much on yours.

    So yes, I am interested in your arguments. You are very welcome to write a bachelor thesis with me.

  2. Toon Alfrink

    I respectfully disagree.
    Allow me to distill your argument:
    A) People that warn us of AGI base their argument solely on a projection of current progress
    B) This progress mostly comes from hardware improvement, which is bounded by hard limits
    C) Therefore (inherent) hard limits will keep AGI at bay (for a long time/indefinitely).
    Now this argument (like any) relies on a few assumptions. Here’s an attempt to list a few:
    – there is nothing AGI “doomsayers” know that I don’t
    – the current paradigm will stay dominant for the coming decades
    – hardware, being deterministic, has hard limits
    – these hard limits will be below human functioning
    (if I’m misrepresenting your opinion, by all means let me know and I’ll try to understand you better)
    Now I could poke a few holes in this, for example the fact that paradigms arrive unexpectedly, or that the brain is a vivid example of hardware that meets the human level. I could present various arguments for AGI that don’t cite current progress at all (let me know if you’re interested), but I think our real disagreement is on a deeper level.
    It’s the acknowledgement of unknown unknowns. The awareness that there are possibilities we are missing. That our current understanding doesn’t even resemble what incredible insights lie ahead of us.
    I believe that, when it comes to claiming that something is impossible, the burden of proof is on the claimant. I’m sure you’re aware of the black swan story.
    When I disagree with someone, I like to assume a >50% probability that I’m wrong, which means that there is a >50% chance that I have something to learn from you. Or you from me. If you’re interested, by all means send me an email.
    As a side note: that AI will never get good at compassion is not obvious to me at all.
