Wanna see Aaron (almost) cry?

Don't worry, A-ron. I studied AI in depth in grad school. I built a lot of AI: expert systems, fuzzy logic systems, etc. I even wrote a fuzzy Prolog compiler, which I really should have published a paper on... but I digress.

AI will never get there. The main reason is that AI can't imagine. AI can't do what humans do: present itself as an actor on an imaginary stage, act out possibilities that might happen if things were changed (a bit or a lot), and then judge those outcomes to see if they are good or bad. AI is very, very literal - it has to follow the rules it's been given and is not capable of figuring out new ones. And if it does manage to create a rule, it does not have the moral judgement to determine whether the rule will have a good or bad outcome, or even whether the actual outcome is good or bad when it applies that rule. It has to be taught rules at each step, and judgement at each step, because it can't imagine.
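
To make that concrete, here is a minimal sketch of the kind of literal rule-following I mean, in Python. The rule set and all the names are made up purely for illustration, not taken from any real system:

```python
# A toy rule-based "expert system": its entire knowledge is the rules
# it was handed. Every rule and name here is hypothetical.

RULES = [
    # (condition, action) pairs given by the programmer
    (lambda obs: obs == "red_light",   "stop"),
    (lambda obs: obs == "green_light", "go"),
    (lambda obs: obs == "pedestrian",  "brake"),
]

def decide(observation):
    """Apply the first matching rule; there is no step where the
    system invents a new rule or judges whether an outcome is good."""
    for condition, action in RULES:
        if condition(observation):
            return action
    # Anything outside the rule base: no imagination, no new rule.
    return "undefined"

print(decide("red_light"))               # stop
print(decide("cyclist_with_groceries"))  # undefined - never taught this case
```

The second call is the whole argument in one line: the system doesn't reason about the unfamiliar input, it just falls through its rules.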

I present this argument whenever someone goes on about self-driving cars. Nope, sorry. You can never program the AI to handle every situation out there. The AI does not have the ability to figure out possible outcomes of actions it could take in a situation it has not been exposed to, and it does not have the ability to judge the goodness of those outcomes. Nor does it have the ability to judge, or even come up with, possibilities when the information it's getting is imperfect. Take the famous case where the AI in a self-driving car ran over a person riding a bike home with groceries balanced on the handlebars. It was night, the light was low, and the AI decided the person was a moose. It could not imagine otherwise, could not come up with other possibilities. So it took the wrong action, because it was not able to determine, or judge, any other outcome. And as a result, someone is now dead. Wow - that's depressing... I digress again.

There are other arguments as well (for example: Moore's law will eventually stop when the MOSFETs in logic chips reach the atomic scale, and as a result processor power, and with it AI power, will eventually plateau). But the upshot is: don't worry. AI will not get there.
 
It touched me too, though not so much the part about AI and music. Possibly we can end up using AI to assist the creative process, along the lines of: here is a melody and a four-part harmony; give me some variations.

What touched me was the part about the things that make us human and our ability to connect. The trenches mentioned reminded me instantly of an old man I knew when I was very young, who had lied about his age so he could go and fight in that war. As a result, he lost both of his legs.

That in turn reminded me of various other dates and memories, which to an AI would probably have no relationship, but which are meaningful to me.
 
The corporate overlords will try to use it to eliminate us proles. It's a dismal tide. Creating music makes us human.
 
So, one: why are we worrying now? Haven't we been saying the same thing ever since the keyboard/synthesizer was invented? In the end you could still see AI-created music as human-created. Someone has to give input to the AI.

Two: I don't have so much of a problem with AI music as an addition; the problem starts once it stops humans from doing it, because AI does it better. I guess here we should look for parallels in history. There are awesome stitching machines that do it better than a human would, yet there is still a huge stitchery community.
 
I am in some ways comforted by what you share here, Trevor, inasmuch as it reflects your respect for artistry, creativity, and imagination; but I think one of the things Aaron fears is a genuine reality - one that is impacting many writers already. And it is this: the MBAs who make decisions about what will and will not be marketed often do not value imagination and creativity the way you do. What they value is what they can measure and make money off of. And often, what they can make money on is "good enough" - in other words, crap that can be generated by a machine more cheaply than by a human being.
 
It is indeed already happening in the writing world... big time. Ever read something online, and at some point your brain starts to itch and you get the feeling that something isn't quite right? It happens to me a couple of times a week.
 
Like in the movie Videodrome, humans are going to evolve new organs in response to, and to process, all the coming technology... :oops:
 