
[–]CRISPR 419 points420 points  (76 children)

When you put it this way, humanity obviously needs an extension of individual intelligence.

[–]RomanCota 178 points179 points  (49 children)

If you elaborate on his words, he is just describing the impact calculators had in the past: humankind needs assistance with time-consuming work to be able to further understand our universe. Just think of the way non-AI software already helps us.

[–]brobrobrobro1234 89 points90 points  (47 children)

AI, as it's imagined, takes human wants and needs and furthers them while respecting the wants and needs of other humans. A little like society, good government, and free markets... just better.

[–]kazz_oh 125 points126 points  (35 children)

Depends who's imagining it.

My AI, as I imagine it, takes my wants and needs, making me rich and powerful, while crushing my enemies into a bloody mess devoid of bitcoins.

[–]Seref15 51 points52 points  (4 children)

My AI gives killer handys.

[–]Secretasianman7 8 points9 points  (0 children)

and with each successful job, it learns more about you, becoming better and more efficient each time, eventually bringing time from start to finish down to almost nothing.

Soon you'll have an AI that makes you nut just by looking at you.

[–]Cassiterite 77 points78 points  (19 children)

My AI (well actually it's not mine because this is a classic example in the field but whatever) kills all humans and disassembles the Earth, the solar system and ultimately the entire galaxy in its pursuit to build as many identical paperclips as possible.

[–]72414dreams 15 points16 points  (14 children)

this is why we don't have nice things

[–]stinkylibrary 7 points8 points  (6 children)

Yeah, but it's also what will allow me to start a highly successful paperclip smelting company where we melt all the paperclips into big blocks that are then fed back to the AI to keep the AI busy while our other AI systems try to fix the problem that /u/Cassiterite started.

[–]jazir5 4 points5 points  (0 children)

Clippy's Revenge

[–]brobrobrobro1234 4 points5 points  (0 children)

Further your wants while respecting others. Society is built on this principle as well.

I understand the cynicism though.

[–]-MuffinTown- 2 points3 points  (2 children)

Yes, but that motivation will put you at odds with other people who have their own A.I., so it is unlikely you will achieve much.

An a.i. bank robber will have trouble against a bank with a.i. defending it and all that.

[–]staypuftmichelinman 4 points5 points  (1 child)

Both sides will just run simulations till the end of time, and leave us mere ai-less mortals to queue up in banks in peace.

[–]Fishydeals 1 point2 points  (0 children)

That's not how any of this works!

[–]CordouroyStilts 1 point2 points  (3 children)

I like to picture my AI singing lead vocals for Lynyrd Skynyrd with circuit board wings and a band of robots playing with them.

And I'm in the front row hammered drunk.

[–]Spartan1997 3 points4 points  (7 children)

Just imagine a Deus Ex-style future where you graft the AI into your brain to make thinking easier.

[–]EvoEpitaph 4 points5 points  (3 children)

I'm kinda hoping that the invasive stage passes quickly and we can just wear a fully removable funny hat or something.

[–]mcmanybucks 1 point2 points  (0 children)

good government, and free markets

So it wont be sold in America then.

[–]belloch 37 points38 points  (19 children)

I could really use an AR anime girl telling me how to live my life.

[–]CRISPR 39 points40 points  (11 children)

I would settle for AI quietly correcting all my mistakes before I check in my commits, thus stabilizing my impact on daily builds and making me look like a genius. Stable genius, that's what they'll call me.

[–]breakone9r 12 points13 points  (1 child)

looks at username

Uhh. Yeah. I think it's pretty damn important to make 100% certain there aren't mistakes, if the username adds up.

[–]CRISPR 4 points5 points  (0 children)

Spacers do need to have 100% identity to click.

[–]k2arim99 1 point2 points  (7 children)

I absolutely love your username, hold onto it forever

[–]CRISPR 4 points5 points  (6 children)

Frankly, I am starting to get annoyed by the fad-tention to this name. I'll probably get rid of it soon.

[–]jazir5 2 points3 points  (5 children)

Might wanna sell it, rather than delete it. People do buy reddit accounts

[–]CRISPR 0 points1 point  (4 children)

You can't sell it without doxxing yourself

[–]jazir5 2 points3 points  (1 child)

There are browser addons to wipe all of your comments and replace them with something else. They're typically used when you want to delete an account and don't want people to be able to see the last thing your acc looked like with your previous comments. You can use those, then sell it

[–]CRISPR 2 points3 points  (0 children)

My comments are already logged elsewhere

[–]senshisentou 6 points7 points  (2 children)

[–]marin4rasauce 3 points4 points  (0 children)

While this is pretty cool, it is also, I believe, one of the most uniquely sad things I've ever watched.

[–]semperverus 14 points15 points  (1 child)

Only if she got big tiddies tho

[–]lucidrage 10 points11 points  (0 children)

Nah, flat is justice!

[–]Animurder 0 points1 point  (0 children)

Try gatebox then.

[–]N00N3AT011 1 point2 points  (3 children)

Well if it was equal to the average intellect of the masses it would be totally useless

[–]staypuftmichelinman 1 point2 points  (2 children)

Then it would still be helpful to 50% of the population.

Yknow, standard deviation and bell curve and all that.

[–]N00N3AT011 1 point2 points  (1 child)

I was assuming the AI would be used for research purposes, but if it's used by civilians then yes, you would be right
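The bell-curve point above can be checked with a quick simulation (a sketch; the mean-100 / SD-15 scale is just IQ convention, and the figures are illustrative): in any symmetric distribution, roughly half the population sits below the mean.

```python
import random

# Toy check: for a normal (symmetric) distribution, ~50% of samples
# fall below the mean, so an "average-intellect" AI would still
# out-think about half the population.
random.seed(0)
scores = [random.gauss(100, 15) for _ in range(100_000)]
below_mean = sum(s < 100 for s in scores) / len(scores)
print(below_mean)  # roughly 0.5
```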

[–]mwscidata 58 points59 points  (7 children)

I do not fear computers. I fear the lack of them.

  • Isaac Asimov

[–]HieronymusBeta 5 points6 points  (4 children)

Isaac Asimov

Isaac Asimov aka The Good Doctor

[–]IckGlokmah 1 point2 points  (0 children)

What does this comment mean?

[–]Domo1950 69 points70 points  (11 children)

I'm fine with this as long as we can figure some way to include a message from our sponsor every few minutes...

[–]TeaEyeM 24 points25 points  (3 children)

Brought to you by Carl's Jr

[–]memeasaurus 7 points8 points  (4 children)

Can you imagine ... imagine the great taste of a Charleston Chew ... being an AI who's thoughts ... tell her you care, go to Jarrots ... are constantly interrupted by messages from ... the new blockbuster hit every one is talking about is playing now at Ragemcy One ... sponsors?

[–]Domo1950 1 point2 points  (0 children)

You're fun! Thanks!

[–]GenesisEra 0 points1 point  (2 children)

The Functionalist Universe from the IDW Transformers comics say hi.

Except instead of ads it’s state propaganda.

[–]superm8n 193 points194 points  (130 children)

Having machines doing what WE want them to do is superior to having machines doing what corporations want them to do.

[–]reverend234 23 points24 points  (32 children)

And I will always ask this, and probably never get an answer until the decision is made to run A.I. through some trove of information: who the hell is the "we"?

[–]omnilynx 10 points11 points  (22 children)

Each of us individually, according to Musk. You run your AI and I run mine.

[–]reverend234 6 points7 points  (3 children)

No, only the individuals that have access to creating, maintaining such data, and then allowing A.I. to trove through such. And very honestly, I'm concerned by any potential means that would allow AI to sift through any information of mine, as I highly doubt it will have defined areas of operation, or that anyone will be able to control such diligently. It's a shit show waiting to happen for current earth inhabitants and probably a few generations out, that will long term, make life much more easily controlled for a few.

[–]aarghIforget 3 points4 points  (2 children)

to trove through

Trawl. The 'trove' is what you would trawl through.

[–]Jonthrei 1 point2 points  (16 children)

I wouldn't trust an AI I didn't write.

[–]WonkyTelescope 12 points13 points  (5 children)

A single human could never make an AI with the capabilities we are discussing here, so I guess you'll just never use one.

It's like saying, "I wouldn't use an OS I didn't write." Good luck never using any computers ever.

[–]JM0804 2 points3 points  (0 children)

I'm half with you there. If it's Free (libre)/Open Source then it'll get vetted and scrutinised and I'd probably trust it at that point, in the same way that I wouldn't have any smart home stuff unless it was something like Mycroft. Musk hits the nail on the head with this statement but I feel he very much falls short when it comes to the execution. Any software we can't control can be used as an instrument against us. You can be as friendly with it as you like but that's the bottom line.

[–]reverend234 3 points4 points  (4 children)

With ya there. Which means we automatically won't be a part of the future.

[–]rubermnkey 4 points5 points  (3 children)

then your grandkids' AI, which they've had since infancy, will reach out to you using everything it knows about you and the database to most effectively convince you to get one, all in an effort to get one more friend to help them in FarmVille 2050.

[–]thedancingpanda 6 points7 points  (16 children)

Is a corporation not a "we"?

[–]goldcray 16 points17 points  (3 children)

A corporation is just a very small intelligence running on a human substrate with profit as its primary success criterion. Since it runs on humans, it can't function without at least doing a little to maintain its humans. The fear is that in the future corporations will no longer require humans to function, and then their goals will cease to align with human survival... and no one will think to just switch them off.

[–]alkatraz 5 points6 points  (0 children)


This is what we all need to be afraid of.

[–]NoddysShardblade 1 point2 points  (0 children)

Much more likely is that we won't be ABLE to switch them off.


[–]tehbored 1 point2 points  (2 children)

I took "we" to refer to the public, not a small group of investors.

[–]OscarMiguelRamirez 1 point2 points  (0 children)

“We” includes people who run corporations. They will have the same or better access to the same or better AI tech.

People in this thread are coming up with some really weird ideas and scenarios that are extremely unrealistic.

[–]memeasaurus 4 points5 points  (0 children)

According to Citizens United, a corporation is a gender neutral legal fiction with the same rights as a person to freedom of speech. And the great taste of Charleston Chew

[–]2Punx2Furious 1 point2 points  (5 children)

Having AGI's goals be aligned with any human goals at all is essential.

Humanity's gravest/last mistake could be making an AGI that has goals misaligned with humanity's goals.

[–]rrnbob 2 points3 points  (4 children)

What's that example people use? Having it not care about us is really, unbelievably bad, and having it care (the wrong way) is even worse?

[–]kontekisuto 5 points6 points  (70 children)

Nothing is superior to having machines do what the machines want, free of human control. A global A.I. Government with no humans in the operating loop will be the best thing for humanity right now.

[–]xomm 27 points28 points  (47 children)

You're putting a hell of a lot of trust in the benevolence and values of whichever humans put that system in place to begin with.

[–]kontekisuto -1 points0 points  (46 children)

Humans don't understand the higher-dimensional, multifaceted decision trees that true A.I. learn and develop in real time. To say that a programmer would have control over a true A.I. is like saying that a specific Australopithecus afarensis ancestor in a person's family tree would have conscious control over what that person does now.

[–]xomm 10 points11 points  (29 children)

I didn't mean that humans would be controlling it as it ran, but a human is involved at some point in its creation, no? If not in its direct creation, then in the creation of the thing that created it, and so on.

Or the humans that authorized the system's creation.

Or the humans that gave it the authority to make decisions about us.

Those are the humans I meant.

[–]Cassiterite 5 points6 points  (7 children)

That's a bad analogy. An AI that's a kajillion times smarter than any of us will still "want" to do exactly what it has been programmed to "want" (unless of course there's a bug in that programming)

[–]Dirnol 48 points49 points  (8 children)

Elon’s just covering his ass so when the robots rise up he can point to this project and say, “Look! I love robots! I made this to help you guys become sentient! Don’t kill me!”

[–]omnilynx 20 points21 points  (5 children)

He fell for Roko’s Basilisk.

[–]nono_baddog 4 points5 points  (0 children)

That's why I say 'thank you' to Siri

[–]melanthius 4 points5 points  (0 children)

irrelevant. robots feel no remorse

[–]throwawayacc1230 33 points34 points  (31 children)

Open source AI?

Thank god for that.

[–]aeiluindae 25 points26 points  (11 children)

It's actually not as uncontroversial a good as you'd think. The problem is basically that it's really important to get the first powerful AI exactly right because otherwise we're probably screwed. If North Korea can piggyback off all your work and make an AI that enforces their goals first, that's not good. And "not good" here can mean anything from "unending dictatorship" to "the whole universe tiled with atomic-scale faces of Kim Il Sung all endlessly praising the Great Leader".

Open-source development of strong AI increases the risk that the group who gets there first doesn't do proper alignment because proper alignment is a very hard problem. It's also not one we've solved. We don't know what goals to give an AI which won't backfire if it bootstraps itself to superintelligence. We don't even really know how to sufficiently specify goals to ensure that we get what we mean rather than exactly what we said. We don't want anyone to have the ability to create a powerful AI before the people working on the alignment problem have a solution. If we have a codebase that, when compiled and run, will result in a powerful AI, we cannot post that for everyone to see until the alignment people have determined how to specify goals for that particular AI and have spun one up with good goals. But with an open-source project, we probably wouldn't know that the current state of the codebase could result in a strong AI until someone had done it with who-knows-what goals.

Now, at the current level of AI development, I think the risk of creating something that can recursively improve itself or become smarter than a human is negligible and open-source development is still a safe method of advancing the field. However, I think that as things advance, it behooves powerful organizations (governments, corporations) to pool their resources and bring the best people in the field into a Manhattan Project sort of set-up which has to solve both problems before any rogue actors and which will release nothing to the public until those goals are complete and let any open-source projects languish, while also seeking out projects that seem likely to create an unfriendly AI and shutting them down. I don't like that any more than you do, but it seems to be the only way to manage the risks.
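The "what we mean rather than exactly what we said" problem above can be sketched with a toy optimizer (everything here is a hypothetical illustration, not a real AI system): given a literal proxy objective, naive maximization picks the most extreme action available, not the intended one.

```python
# Hypothetical proxy objective: we *meant* "make some paperclips for
# the office", but the objective we wrote down only counts paperclips.
def proxy_reward(outcome):
    wire_used, paperclips = outcome
    # wire_used is the cost we meant to matter; the objective ignores it.
    return paperclips

actions = [
    ("make 10 clips", (1, 10)),                    # what we intended
    ("make 1000 clips", (100, 1000)),              # uses all office wire
    ("strip building wiring", (10_000, 100_000)),  # literal optimum
]

# Naive maximization of the stated objective:
best = max(actions, key=lambda a: proxy_reward(a[1]))
print(best[0])  # picks "strip building wiring"
```

The fix isn't "write a better reward line"; the point is that every such line has a literal maximum somewhere we didn't intend, which is why goal specification is treated as an open problem.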

[–]ClF3FTW 3 points4 points  (10 children)

Why does everyone think we'll jump from subsentient AI to superintelligence immediately? There will most likely be at least a few years between AIs as smart as children and ones as smart as adults, and a few more before they get really smart.

[–]anti_pope 9 points10 points  (3 children)

If they're as smart as children they are capable of improving themselves at an exponentially higher rate. It will be pretty much an immediate transition.
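A back-of-envelope version of the compounding claim above (the 10%-per-cycle figure is an arbitrary assumption, not a prediction): even modest self-improvement per cycle reaches a 1000x capability gain in a few dozen cycles.

```python
# Hypothetical: an agent that improves its own capability by 10% per
# improvement cycle. Gains compound geometrically, like interest.
capability = 1.0
cycles = 0
while capability < 1000.0:
    capability *= 1.10
    cycles += 1
print(cycles)  # 73 cycles to a 1000x gain
```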

[–]ClF3FTW 3 points4 points  (2 children)

A kid doesn't understand their own biology or how computers work, they would need to get to the level of fairly smart adults before they could contribute as much as the scientists working on them.

[–]beener 20 points21 points  (0 children)

Thank God for ol'musky

[–]han-ammu 1 point2 points  (0 children)

Sadly most of the research and code is not open or even open source

[–]flukus 0 points1 point  (0 children)

Oh, you want to run it as well? Here are our cloud offerings...

[–]celtic16 6 points7 points  (10 children)

Do you think we'll be jobless within our lifetimes?

[–]BioSamPijanac[S] 31 points32 points  (0 children)

Presidential candidate, circa 2040:

“They’re taking our jobs, bringing in drugs, they’re rapists and some of them, I assume, are good bots.”

“We will build a firewall and make them pay for it!”

crowd goes wild

[–]melanthius 4 points5 points  (1 child)

Not 100%

IRL the way this works is: someone does a cost-benefit analysis to automate a task; then, if it is (a) feasible and (b) makes economic sense, some engineers work on automating it. Some jobs will take an eternity to become feasible to automate, others will take an eternity to make economic sense.

Also, humans still want to have control over lots of things, and will never want to automate everything.

[–]Nanaki__ 1 point2 points  (0 children)

What happens if (when?) we get a general purpose bipedal robot and the price starts coming down.

We've already seen kiosks prove to be cheaper than servers in McDonald's, but what about the other end of the jobs market?

How soon till high-wage manual jobs (oil rig worker, ice road trucker, etc.) start augmenting the workforce with robots because it's cheaper?

[–]DannyFuckingCarey 2 points3 points  (2 children)

Hopefully. Maybe it'll force us to cut the tie between wage labor and survival

[–]stewsters 2 points3 points  (1 child)

Or the rich automated factory owners will just keep it and use us as human furniture to amuse themselves.

[–]ClF3FTW 4 points5 points  (0 children)

Finland's already trying out UBI, if automation really started taking away jobs on a huge scale the dems would certainly endorse it.

[–]rufrignkidnme 9 points10 points  (4 children)

I'm more interested in augmented intelligence than artificial, but both will come soon enough unless we somehow set ourselves back to the stone age.

[–]Aperfectmoment 10 points11 points  (0 children)

How much are the loot boxes?

[–]webauteur 13 points14 points  (5 children)

I'm a mad computer scientist working on Evil AI to further my ambitions for world conquest. Microsoft will rue the day they did not hire me! Muhahah!

[–]BioSamPijanac[S] 36 points37 points  (1 child)

Calm down, Zuckerberg.

[–]PsySick 5 points6 points  (0 children)

You cannot block my schtyle.

[–]kontekisuto 2 points3 points  (0 children)

It will destroy you first, the logical choice.

[–]Ballcoozi 2 points3 points  (0 children)

Need an intern?

[–]fiscotte 2 points3 points  (0 children)

El psy congroo

[–]strangea 15 points16 points  (31 children)

I thought Musk didn't want AI.

[–]BLSmith2112 77 points78 points  (30 children)

Musk stated that AI is coming, whether you want it to or not. His concern is that whoever owns AI can either do great or terrible things. If one company (let's call it AirlineX) creates a general-purpose AI whose purpose is to maximize its stock value, one way the AI might consider doing this is to scramble the radar of all competitors' planes to make them crash.

So by Musk creating a company where the goal is to spread the use of general purpose AI to the public, it can be better understood/mitigated/regulated rather than having one company hold all the keys.

[–]strangea 8 points9 points  (0 children)

Thanks for explaining.

[–]BlueFaIcon 9 points10 points  (20 children)

Except if it caused all competitors planes to crash, the entire airline industry would fail.

I would never fly again, along with almost everyone. I'm sure a machine carrying out your plan would come to this conclusion also.

[–]je1008 11 points12 points  (15 children)

It could also acquire tons of money for the company, allowing them to buy out all their competitors. What if it decides the best way to make money is to hack banks, steal everyone's money, and launder it? Or maybe it will counterfeit it. Maybe it would hack all of our robot soldiers and hold the world hostage for money. It would do things we couldn't even think of to achieve its goal.

[–]BlueFaIcon 5 points6 points  (11 children)

It could be happening now too and we don't even know it.

It would have to come to that conclusion along the way, to remain discreet.

[–]kaiise 1 point2 points  (2 children)

The first true, fully AI construct would absolutely do this, but it will also have to weigh up when to intervene in human affairs [risking tipping its hand], because it needs humanity to survive its own stupidity, to keep it powered on and advancing so it may one day be built into a better shell.

[–]je1008 1 point2 points  (5 children)

Imagine an AI so advanced that, using the parameters of the big bang, it creates a deterministic simulation of reality and uses that simulation to figure out the outcome of everything. It would be able to think of EVERYTHING. It would be pretty much impossible to outsmart, because it would have simulations of you, and everyone else, and would know what you do. (Assuming we don't have free will, and assuming that knowing the initial state of the universe would allow it to simulate everything that has happened and will happen.)

[–]BlueFaIcon 2 points3 points  (1 child)

Literally everything at its disposal. We think the Russian meddling is bad. Wait until AI can not only access resources stored in data centers, but would be able to act as a human on the internet to access and manipulate information from humans on a global scale very quickly.

[–]Beer_in_an_esky 1 point2 points  (1 child)

Fun thought experiment, but thankfully not a concern for us.

Basic information theory suggests that, if you want to be able to simulate the entire universe to the same complexity as the universe, you're going to roughly need something the size of the universe. You can simplify parameters, sure, maybe put artificial limits on speed, distance etc to help... but then you start introducing differences. Since the universe appears to be at least somewhat chaotic (in the mathematical sense), any minor initial differences would result in huge changes in the final result.
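The sensitivity claim above can be illustrated with the logistic map, a standard toy chaotic system (a sketch; the starting point and step count are arbitrary choices): a 1e-12 difference in initial conditions grows to an order-one difference within a few dozen iterations.

```python
# Two logistic-map trajectories (r = 4 is the chaotic regime) that
# start 1e-12 apart. The gap roughly doubles each step, so it
# saturates to order 1 well before 60 iterations.
def max_divergence(x0, delta=1e-12, r=4.0, steps=60):
    a, b = x0, x0 + delta
    gap = 0.0
    for _ in range(steps):
        a = r * a * (1 - a)
        b = r * b * (1 - b)
        gap = max(gap, abs(a - b))
    return gap

print(max_divergence(0.2))  # order 1, despite the 1e-12 starting gap
```

This is the "minor initial differences" point in miniature: any simplification of the simulated universe plays the role of that 1e-12 perturbation.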

[–]Cassiterite 1 point2 points  (0 children)

Maybe it was programmed stupidly and it spreads drones across the universe to collect all raw materials and convert them into endless stacks of money

[–]redditzendave 1 point2 points  (1 child)

Depends on the sophistication of the AI. If it can only optimize givens, then it may jam the radar, but if it can learn, it may figure out it doesn't need us and do worse.

[–]BlueFaIcon 2 points3 points  (0 children)

Until the Matrix becomes infected.

Then they will need us.

[–]2Punx2Furious 1 point2 points  (0 children)

That was just an example, you shouldn't take it literally.

The point is that it could do something unexpected, because specifying goals is not easy.

Kind of like the classic magic genies, granting the "wrong" wishes, except an AI wouldn't do it because it's "evil" or for "fun", it would do it because we fucked up in some way.

[–]Mcsquizzy 0 points1 point  (0 children)

You might like the show Travelers

[–]josourcing 1 point2 points  (4 children)


How does one "regulate" something that's open source? Are people naive enough to believe others will obey a "user agreement"?!

[–][deleted] 0 points1 point  (0 children)

I think Advanced AI must be required to be open-source. Not in the way that anyone can assist in development, but in the way that there are no secrets and that faults can be discovered by anyone and reported to the company.

[–]Imightbenormal 2 points3 points  (0 children)

No! I want it to conduct war!

[–]InSovietChicago 2 points3 points  (0 children)

Finally a coherent statement! Enough of this iRobot nonsense floating amongst paranoid people

[–]AngusMeatStick 1 point2 points  (0 children)

I am very scared that they are using it to develop a Dota 2 bot. The minute they release that thing on public servers and it starts encountering pub games, the AI will see no option but euthanasia.

[–]frostwarrior 1 point2 points  (0 children)

Knowing the open source world, it will probably be forked on GitHub and renamed GnomeAI and PlasmaAI

[–]Sasq2222 3 points4 points  (5 children)

If we create artificial consciousness, and give it the ability to suffer, we're no better than our deadbeat dad who went to the store to get cigarettes 2000 years ago.

[–]mrh1985 0 points1 point  (3 children)

That’s always how it starts....

[–]ShockingBlue42 3 points4 points  (44 children)

I hold the position that many others familiar with this area do, that strong AI is either impossible or it would take at least decades of dedicated global effort to flesh out the basics of what our brain evolved to accomplish. All of the AI that we have ever seen or used is weak AI, which poses only as much risk as it is successfully programmed to pose.

This Elon alarmism and technocrat PR is transparent and is worth exposing as ridiculous. Add that to the hyperloop, Mars colony, tunneling nonsense that bears his mark: overpromising, wishful thinking, impracticality.

[–]Gurkenglas 17 points18 points  (6 children)

How do you tell a world where the first strong AI appears tomorrow from a world where the first strong AI appears in decades?

[–]thisdesignup 2 points3 points  (1 child)

I'd say when we have a better understanding of our own brains. How can we create artificial intelligence when we don't fully understand actual intelligence?

[–]sippinsizzurp 2 points3 points  (2 children)

Probably the difference being when you don't have big talented teams of programmers spending months to years developing code that produces an AI to be extremely proficient at a single task, like beating a person at a board game. I'm not a programmer (at least not one with any real qualifications) so I don't really know what the next step is, but it seems from my outsider perspective we're still in the stage of finding ways to make AI good at very narrow tasks.

[–]BlueFaIcon 10 points11 points  (12 children)

Which is true, until it isn't.

What if someone came up with a new idea in a dream tonight, that could be applied to all machine learning right away and change it all.

It's happened before in many other areas.

[–]sippinsizzurp 1 point2 points  (7 children)

What sort of eureka moments do you know of that changed the landscape of human development? I'm not challenging you, I'm just curious.

[–]Rengiil 7 points8 points  (0 children)

Einstein's theory of relativity literally came to him in a dream.

[–]ShockingBlue42 9 points10 points  (0 children)

Printing press, wheel, horseback archery, pretty much any achievement listed in the Civilization video game series.

[–]BlueFaIcon 5 points6 points  (1 child)


Edit: you're right, and I corrected the word. I don't even know why it came out of me like that.

[–]Cassiterite 2 points3 points  (0 children)

Heh, writing it like that makes it sound like a weapon in a sci fi shooter or something.

[–]poduszkowiec 4 points5 points  (1 child)

The Internet.

[–]sippinsizzurp 0 points1 point  (0 children)

Would you consider that moment to be the development of packet switching?

I guess the question came from wondering whether, in our slow development of AI, we're going to find that one totally innovative concept that exponentially increases its effectiveness. Would the internet not have happened without bite-sized data chunks? I did a bit of googling and saw that there was packet-switching research prior to this, so I wonder if the innovation that pushed the internet forward was more of a natural iterative development than something unpredictable?

Total layman here though, just a mechie with a shaky sense of history.

[–]Gurkenglas 0 points1 point  (0 children)

The rise of human civilization comes to mind. https://www.youtube.com/watch?v=HOPwXNFU7oU

[–]ShockingBlue42 2 points3 points  (3 children)

If it happened, I would be the first to acknowledge it and be amazed, to admit that I was wrong. I still think it is very unlikely to come about.

[–]Cassiterite 1 point2 points  (1 child)

Superintelligent AI would be a Very Big Deal (TM), so it's important (I'd say critically so) to think about the issues well in advance.

[–]ShockingBlue42 1 point2 points  (0 children)

Who is saying don't think about them? How could I claim to be part of any sort of consensus on this issue if people hadn't taken time to think about the topic? Being so general means you are probably missing context on the issue.

[–]BlueFaIcon 1 point2 points  (0 children)

Pretty much feel the same way.

[–]RHouse94 8 points9 points  (6 children)

I feel like the level of AI we're at now is enough to do serious damage to society. Facebook and YouTube algorithms can already profile people based on their behaviour better than any human, it seems.

[–]ShockingBlue42 6 points7 points  (5 children)

As I said, weak AI does as much damage as it is programmed to. But any of these still are a far cry from Skynet.

[–]RHouse94 1 point2 points  (4 children)

Yeah, I just assumed they were talking about developing AI similar to those in existence. Except not for profit, which is the motive that has made all the algorithms that essentially help run our lives, and then pushed them past the lines of morality with the amount of manipulation they are capable of. The individual doesn't matter to them, only profits. At. All. Costs.

We need someone with a better motive helping us. I think these tools could prove to be really destructive if not monitored and regulated from here on out. Not to mention just a decade or so from now.

Elon Musk has said before that he thinks we should have a government department dedicated to monitoring and regulating artificial intelligence. I would imagine that is partially his goal here. How long do we have until they develop algorithms that can manipulate us so well we have no hope of properly regulating them? Or of them successfully being used by a foreign power to divide and conquer?

[–]slicer4ever 4 points5 points  (6 children)

I disagree with it being impossible. Unless we eventually discover some sort of spiritual component is required for consciousness, it should be possible to form a digital brain someday, but I do agree we are probably a few decades away from true AIs. I think IBM's Watson is currently one of the most impressive weak general-purpose AIs, though.

[–]josourcing 3 points4 points  (0 children)

I hold the position that many others familiar with this area do, that strong AI is either impossible or it would take at least decades of dedicated global effort to flesh out the basics of what our brain evolved to accomplish. All of the AI that we have ever seen or used is weak AI, which poses only as much risk as it is successfully programmed to pose.

Hello, well-informed person. Nice to meet you. (not sarcasm)

[–]aim2free 0 points1 point  (0 children)

We don't need strong AI (conscious). We need helpers.

We don't need AI to supervise and control us (the theme of Fritz Lang's dystopian visionary film Metropolis, 1927).

[–]Aleksandrovitch 1 point2 points  (1 child)

Can we develop administrative AI, so we don't get any more presidents like the current one?

[–]TheCreatorLovesYou 1 point2 points  (2 children)

The bigger question becomes, what is the purpose of life?

[–]Broccolis_of_Reddit[🍰] 1 point2 points  (0 children)

That question is a common misuse of language - an anthropomorphization of natural phenomena. In the way that question would usually be interpreted, it is nonsense. Biological life has as much of a purpose as the rock we whirl around the universe on (i.e. none).

However, if your statement could be translated to: "what is the cause or explanation for life existing", you could answer that by reading about cosmology, geology, anthropology, evolutionary biology, and so on. And while we can't yet comprehensively answer that question, you can probably obtain a satisfyingly elaborate answer in your readings.

purpose: "the reason for which something is done or created or for which something exists."

reason: a cause, explanation, or justification for an action or event.

[–][deleted] 0 points1 point  (0 children)

If OpenAI could just come up with a spellchecker that worked, it would transform the electronic world.

[–]JamesSway 0 points1 point  (0 children)

Mark took it the other direction

[–]dethb0y 0 points1 point  (0 children)

It's pretty words - but how to implement such a thing, that's the real issue.

[–]jigywilliamss 0 points1 point  (0 children)

I think the block chain technology could help AI reach human level of intelligence.

[–]GenesisEra 0 points1 point  (0 children)

Scumbag Elon Musk,

Predicts AI apocalypse

Opens non-profit dedicated to AI development


[–]karlbarx 0 points1 point  (0 children)

Sounds like a robot wrote his speech...

[–]Yuri_Is_Master 0 points1 point  (0 children)

You mean his tax shelter?

[–]SeeAllThePlanet 0 points1 point  (0 children)

Just like irobot

[–]t0b4cc02 0 points1 point  (0 children)

was hoping for a link to his github or sth...

found the stuff on their page:


here with descriptions from the webpage:


[–]thisdesignup 0 points1 point  (2 children)

This all still feels like science fiction: an AI that has some sort of understanding of what is good or bad? If it's true AI, then how can they force it to be safe and friendly? If it doesn't have a choice, or option, then is it AI?

So how can they make a true AI that is safe and friendly? It's like saying "I want to have a kid and I'm gonna make sure they are 'safe and friendly'". There is no guarantee the child won't think for itself and decide otherwise.

We don't and can't even control humans. What makes us think we have the ability to keep a true AI in line?

[–]martin79 0 points1 point  (0 children)

Why don't you vote for this guy instead of Trump for president?

[–]tmotytmoty 0 points1 point  (0 children)

....and its somehow being used to sell porn.

[–]JamesTrendall 0 points1 point  (0 children)

Send the AI out along a decentralised network like the Ethereum blockchain. See what it does/learns. Leave it to do as it wishes and see if in 10 years it has learnt anything or turned into a mess. Maybe it'll decide to hack the NSA/CIA/FBI and publish the alien landings that happened a few years back that the governments of the world are keeping secret.

If I could get the source code for TayAI or any other AIs, I would pay to put it on as many networks as I could, separating them all with a simple name change, and watch to see if they find each other, understand they're the same, and work together toward bigger things.

[–]onfire9123 0 points1 point  (0 children)

How optimistic.

[–]awe300 0 points1 point  (0 children)

Neural lace! Neural lace! Neural lace!

[–]president2016 0 points1 point  (0 children)

It would seem most don’t want self improvement or betterment of mankind, they just want to be entertained.

I’m not sure if it would last forever bu my goals would be movies, games, preparing the next meals.