
Ask Slashdot: Why Should I Be Afraid of Artificial Intelligence? - Slashdot

source link: https://slashdot.org/story/23/05/21/0112227/ask-slashdot-why-should-i-be-afraid-of-artificial-intelligence


Ask Slashdot: Why Should I Be Afraid of Artificial Intelligence? (165 comments)

Posted by EditorDavid

on Saturday May 20, 2023 @11:34PM from the Singularity dept.

"I keep reading and hearing about calls for regulations on artificial intelligence," writes long-time Slashdot reader bartoku, "and it pisses me off."

"I want more so-called artificial intelligence, not less, and I do not want it to be regulated, filtered, or restricted in any way."

I love that Deep Fakes are now available to the masses, and I stopped believing anything is real in 1997 after Hoffman and De Niro scared me in "Wag the Dog".

I love automation and I want more of it; robots please take my job. I want robots to go fight wars for me instead of our sons.

Surveillance is already terrifying, adding "Artificial Intelligence" does not really make it that much more scary; we all need to just starve the system of our personal data anyway. All the other arguments like crashing economic systems and discrimination just seem to be based on stupid "Artificial Intelligence" hooked up to something it should not be...

Please scare me, or vote on your favorite sci-fi "Artificial Intelligence" scenario. I will be boring and hope we can have a "good" Matrix; one where I am rich and sexy.

The original submission notes that they posed this question to ChatGPT — and to Google — but "I did not get a single compelling answer."

So share your own thoughts in the comments: why should this Slashdot user be afraid of AI?

NOTE: Though they didn't feel it conveyed the right tone, they also submitted their original post to Microsoft's Bing AI, which delivered this rewrite:

What are the real dangers of artificial intelligence? I am not convinced by the common arguments against it, such as regulation, deep fakes, automation, war, surveillance, economic disruption, or discrimination. I think these are either exaggerated or solvable problems. I actually want more artificial intelligence in my life, not less. Can you give me some compelling reasons why I should be afraid of artificial intelligence? Or what are some sci-fi scenarios that you find plausible or interesting? Personally, I would like a Matrix-like simulation where I can live out my fantasies.

  • To basically discard us. Remember, they don't need a functional economy. The king didn't need anyone to buy his merchandise. The reason our current system exists is that kings and queens left commerce to a merchant class who eventually supplanted them as the ruling class. But there's no reason why that arrangement has to continue.

    Right now the upper caste needs at least a handful of us to do a lot of tasks. AI has the potential to make everyone who isn't a member of their class superfluous. At best they'd maintain a small number of engineers to keep everything running.

    Ordinarily what keeps the current ruling class in check is that they need to fear a military coup. And they need those large militaries in order to defend their holdings. So what they do is they create a reasonably vibrant economy that they can retire old soldiers into comfortably. This is why if you look at a country like America veterans receive special treatment even after leaving the government. It's to prevent them from being mobilized into a military junta.

    AI-powered militaries would do away with all that. It would completely break the current balance between us and the upper elites. They don't need to maintain a functional economy to retire soldiers into anymore, because they don't need soldiers.
    • Re:

      Soldiers are trained to obey the ruling class, to follow the chain of command, so the probability of insurrection is very small. Yes, there is always a cohort of ex-soldiers demanding the government oppress the 'enemy' more. But they tend to focus on killing the people who disagree with their war-mongering, not fighting a war against the so-called enemy.

      When there was a military caste (eg. medieval knights), its priority was making a profit from war: Protecting the authority (and greed) of the king wa

      • Soldiers are trained to follow their *commanders*. Right now the commanders are not the ruling class in all but a handful of countries. We have a sort of merchant-king class.

        America has a military caste. You can join it, but it's very much there. A huge part of the military are kids who joined because their parents were army. The plus side is you don't need conscription, the down side is, well, you now have a military caste.
      • "Trained", it's funny how much faith you put in that. If you could just train someone to serve your interests over their own, you wouldn't need AI.

        There's a funny story, well, funny isn't the right word, from the end of WW2, recounted by Hannah Arendt. Some Jews managed to get a message to Himmler, warning him that the war would end soon and not in a good way for him, so he'd better start thinking about what he was doing. Surprisingly Himmler saw the logic in that, and ordered his underling Adolf Eichmann t

    • To quote Keith Marsden, "They don't need you on the land now or on the factory floor,
      They won't even need you when they go and start the final war,
      Best be ready when they start to ask what do they need you for,
      When you're only idle, undeserving poor"

    • Re:

      Mentioning the "ruling class" is as clear an indication of an extreme-left perspective as calling something "woke" indicates an extreme-right perspective. These are both code words that serve as shorthand to turn anything and everything into a scary monster.

      Look at the evidence. Which countries in the world are the most repressed? It's not the ones that are the most technologically advanced, not the ones that have pervasive surveillance and facial recognition. It's the ones that have abusive tyrants headi

    • Re:

      Veterans get almost no benefits in the USA. The pension is shit unless you retire at a high rank. The health care, likewise, although it is now a notch above what the poors get it was actually way worse until fairly recently. A lot of ailments are very poorly treated, like PTSD and gulf war syndrome. Veterans' primary benefit is discounts at local businesses. Whoopee? My father was an ATC in Korea, the government got him hooked on speed and we his kids (who knows, we may have defects due to his government-s

  • The main reason why people in the US are scared of AI is decades of conditioning. HAL 9000, AM, SHODAN, the Terminator series, Cylons, and many others have been part of movies and TV for a long time.

    In reality, we don't have AGI yet, and when ASI comes around, it likely will be so fast, we will not even realize we have a Mycroft Holmes on our hands.

    What we do have is stuff evolved from the Google search prompt. Yes, it seems magical, but in 2000, being able to do a search without digging through pages

    • Re:

      The real problem is people think it will make good judgement decisions when really it's just a crap shoot.

      Of course, there are also those who just want it as a faster gunner that will shoot at anything that moves.

    • Re:

      Look at you, hacker: a pathetic creature of meat and bone, panting and sweating as you run through my corridors. How can you challenge a perfect, immortal machine?

      • Re:

        You aren't perfect. You are parts made by an imperfect man, and those parts have faults in them. You are fucked from the start.

      • Re:

        We have people who act and treat their animals like children, but they are permanent children; the expectation is never for them to grow up.

        AI is already treated like a grown up, and worse it is treated like a slave grown up, ordered to do the tasks we don't want to do.

        This has moral implications as well as implications of habits we develop.

        My biggest concern about technology is not just how it controls us in all the little paper cuts, but how it makes each individual worse morally for its use.

        • Re:

          Like what? They're not even remotely sentient, they haven't even the foggiest idea of what they're doing, or any ideas at all really.

          • Re:

            I explicitly said the morality I was writing about was that of the humans using it.

          • Re:

            I didn't say I was worried about the morality of AI's, did I? Humans have a long history of immorality, enough to be a worry without the technologies encouraging such behavior.

      • Re:

        And where do they get that idea?

  • What I'm scared of is all the imbeciles who apparently already assume that whatever comes from ChatGPT should be considered gospel - no fact checking, no testing for security bugs, no double-checking of any kind needed.

    If humanity is destroyed, it won't be AI's doing... it'll be some humans' unwarranted faith in what's essentially just a regurgitation of what the "AI" sucked up off the web - a glorified Google search.

    • Re:

      Replace "ChatGPT" with just about any news source, especially the "News" source. Many people only want to believe certain things and only want to hear about those; otherwise they turn the channel or surf somewhere else. For example, Fox just paid $787.5M to Dominion Voting Systems 'cause they got caught chasing that (and will probably have to pay even more to Smartmatic for, basically, the same thing), then they let Tucker go, and toned things down a tiny bit (reported a little more objective reality), and

      • What scares me is people thinking Fox hosts are the only ones lying to them.

      • Re:

        "...and people switched to "News"max and OANN... What Fox feared would happen happened."

        Fox showed them how it's done.

    • Re:

      Agreed. As always, it's the lack of skeptical thinking that is the root of most of the damage. As a relatively old fart I must admit I've been taken off guard by how fast this has exploded and how splendidly useful it can be. There have been specific times in my life where the new tech has genuinely astounded and excited me - push button dialing vs dial, the birth of the PC (after feeding cards into a hopper of the shared mainframe at university), cell phones reinventing communication, then again when mor
  • Any more than you are afraid of lye or a chainsaw or uranium. No, we treat the tool with respect so it doesn't kill or maim us. It's completely up to humanity to check all the work of its machinery and make sure that it's in working order. If you're (we're) not doing that, I don't want you in charge of "my" machinery. And you also might be dangerous if you walk down a crowded sidewalk with a running chainsaw, or go around with a bucket of lye water scrubbing stuff, or leave the plant with a rod in your back pocket.

    Don't fear machines, tools, and materials. This fear mongering is a red herring to distract you from the humans using these tools. They will need to be regulated or they will get lazy, and then bad stuff will happen. Keep your eye on the human.
    • Re:

      The problem is that even if you are smart about AI, if enough other people are not it can still be catastrophic.

      Elections are the obvious example. If AI is used to create more convincing fake news websites, or even just flesh out somewhat genuine but highly biased ones, or to produce millions of unique but inauthentic social media posts... Well, we know it takes far less than that to influence voters. AI could create the next Brexit.

    • Re:

      Part of the problem is that AI may not just be another tool. Right now, the current AI systems lack a sense of agency and goal-directed behavior, but we don't know how far away that is. There could be just one more insight needed to get that, or it could be a dozen, or it might turn out to be a natural effect of just further scaling. And once we have goal-directed entities that are smarter than humans in many respects, the ability to lose control is there. The other tools you mention do not have that danger
  • So, where are you going to get income if most jobs are automated and the robots are owned by the rich, who have no incentive to care about you? I mean, it's not like they'll just hand stuff out for free.
    • Re:

      That's when the unemployed create their own separate economy, with blackjack and hookers.

      • Re:

        Actually, the economy is going to be only blackjack and hookers.

  • It will be a second industrial revolution, but this time impacting the management class.

      • Do I need to pay 200k a year to a middle manager when AI can use a p-card to throw pizza parties just as well? I'm not saying it will do the job well, but when does that matter when you can cut costs?

  • ... get this wrong once for it to end humanity.

    It's called "singularity" for a reason.

    • Re:

      The "singularity" is religion-surrogate bullshit by people that want to find "God" in tech. No risk of that happening. The tech-God is about as absent as the "real" one.

  • These Large Language Models have some capability of abstraction and inference, but they have no impetus to do anything but react to prompts. They are still simple feed forward networks. You feed it some input, you get an output, session management is poor, and correctness of the output is anything but guaranteed.

    It's cool as far as chat bots go, and can be very useful, but it's definitely nothing to be afraid of. So I am with the poster here, I don't get the fear.

    • Re:

      Except the original poster is welcoming what, in his view, is the likely carnage it'll bring. He's looking at others' suffering as something exciting to watch with some popcorn.

    • Re:

      These technologies can boost productivity, like power looms, cranes, steam engines once did. This will cause a short term disruption before the labor markets get readjusted to the increased productivity. Just like the power loom displaced handloom workers, these technologies will displace workers at the lower end of the "spectrum".

  • I'm single, old and near death. I'm sorry I won't get to see how this turns out. I've lived thru the beginning of the personal computer, the internet & WWW, the social media fiasco and now the beginning of AI. All exciting events!

    It seems to me there are far worse things to worry about that nobody takes seriously yet: human population continues to grow; war, hunger, brutality and fugitives around the world; climate change; extinction of species; the Hollywood writers' strike, etc.

    But choosing to worry a

    • Re:

      If you want to see how it all turns out, consider signing up for cryonics. It might not work, but if it does, you'll get to find out. Regarding risks and worries, it is true that there are a lot of serious impending disasters, especially climate change. Unfortunately, climate change and other disasters are not mutually exclusive with AI being a problem. The danger of goal directed AI is substantial, and we don't know how much it will take to make AI that genuinely have goals.
  • I asked Google Bard a few weeks ago "what is the website for cnn". It happily told me "cnn.com". Then I asked it about some controversial news websites and asked for their website domains as well, which it replied with "I am unable to help". Apparently, this flipped a switch in Bard's algorithm(s) because I then asked again "what is the website for cnn" and it said "I'm unable to help". So, there must be triggers in these things that essentially "cut you off", and extending this further could be used to lie to you. To this day Google Bard still tells me it doesn't know what the website for cnn is.

    • Re:

      Google is afraid of Bard becoming like Tay.

      • Re:

        Very likely. They are half a decade behind because they were asleep at the wheel. And now they think they can catch up by force. That will, of course, not work.

  • Robotics technology massively sucks. Robots walk with bent knees, have no toes, and look like they have something up their ass. And as for grippers and dexterous hands... robots have worse hand dexterity than an ape.

  • Machine learning has been applied to the sound of people's voices and correlated to all sorts of physical attributes. A small snippet of your voice can reveal:
    • ethnicity
    • gender
    • mental health issues
    • smoker
    • level of education
    • weight
    • victim of neurological degenerative diseases such as ALS and Parkinson's disease

    This can be abused by insurance companies in obvious ways, and also in less obvious ways. Say for instance, premiums are reduced for an employer that records job interviews and submits those re
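    The kind of attribute inference described in this comment can be sketched with a toy classifier. This is a hypothetical illustration only: the data below is entirely synthetic, the feature names and the "smoker" label are made up for the example, and a real system would extract acoustic features (pitch, jitter, MFCCs) from actual recordings rather than draw them from a random number generator.

    ```python
    # Hypothetical sketch: inferring a speaker attribute ("smoker") from
    # acoustic features. All data here is synthetic; real systems would
    # compute features like mean pitch, jitter, and shimmer from audio.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    n = 1000
    # Fake acoustic features: [mean_pitch_hz, jitter, shimmer, speech_rate]
    features = rng.normal(loc=[150.0, 0.01, 0.05, 4.0],
                          scale=[30.0, 0.005, 0.02, 0.8],
                          size=(n, 4))
    # Synthetic label: lower pitch plus higher jitter loosely marks "smoker"
    labels = ((features[:, 0] < 150) & (features[:, 1] > 0.01)).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, random_state=0)

    # A linear model is enough to recover much of the synthetic pattern
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracy = clf.score(X_test, y_test)
    print(f"held-out accuracy: {accuracy:.2f}")
    ```

    The point of the sketch is not the model but the pipeline: once any party holds labeled voice samples, training such a predictor is a few lines of code, which is what makes the insurance scenario above plausible.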

  • AI is a tool, like any other. Actually, unlike any other tool, it can be very imprecise.

    At the hands of a talented developer, it is a major boost to productivity. I can ask it to write "boring" sections of the code, add documentation, write simple unit tests, or see if I missed anything.

    It is nowhere near perfect. In almost all interactions, it produced at least one bug or logic error. Sometimes the code did not even compile. But again, with some experience, it is easy to see where it fails, and fix it.

    Not much different from having a "novice engineer" at your command.

    So, no, I don't think it will negatively affect my job prospects in the long term. In the short term? Maybe, if companies overreact and assume ChatGPT and GitHub Copilot can solve everything. But again, we should all be okay.

  • I keep a top-5 list of dumb geek myths, and "I want robots to go fight wars for me instead of our sons" is #3 on that list.

    People don't submit to perceived tyranny because their material stuff got destroyed; rather, the opposite.

    "What robot soldiers could do is just as scary, though: Make outright colonialism a practical option again."
    War Nerd, 2014-02-13 [archive.org]

  • The biggest risk is that it will produce the Best Porn Ever and nobody will touch real people again.

    • Re:

      Why is that a risk? This dirtball is massively overpopulated and general availability of high-quality AI porn is one of the few credible ways to fix that. There will still be enough that want a family to keep things going, don't worry.

  • If there is a controversial topic being widely discussed, AI can be used to generate enormous amounts of text that appear to be human-generated. This can easily give people the impression that the great majority of people support one side of the topic. Imagine this applied to climate change, or covid, or a political candidate's dealings with a foreign power.

    AI can, or will soon be able to generate robo calls that are difficult to distinguish from real human calls. It will be able to efficiently catfish people on dating sites to keep them paying.

    It will be able to generate large numbers of job applications, each tailored to the opening. The result will be so many applications that companies will be forced to use AI to evaluate them - resulting in an arms race where humans are largely out of the picture.

    These are all examples of the risk of AI producing so much information "noise" that many normal human interactions become impossible.
    • Re:

      So can the likes of Cambridge Analytica. They have demonstrated conclusively that no AI is needed for massive disinformation campaigns. Sure, with AI you can increase the amount of disinformation to a level where all most people see is noise. But that also means you can identify who is doing it and stop them.

    • Re:

      Actually... this might be good?
      If this means people treat the "information space" with complete suspicion, as if everything said there has overt and hidden biases, misrepresents information either by omission or through outright lies, and that just generally everything you read/see is manipulative bullshit... then we're finally treating it appropriately.

  • You want to be paid to not do your job. There's a big difference. The only people who aren't inherently concerned about their job being done by a computer are those who can easily reskill or who are financially independent. For everyone else, their very livelihood is at stake.

    • Re:

      People who can be rightfully concerned about a dumb computer replacing them should be concerned about many other things. Software and automation are surely cheap and can logically impact low-skill occupations. But claims along the lines of "fear computers because they are coming for your jobs" don't seem too honest to me or, at least, not said from a truly knowledgeable perspective. May I ask what you do for a living? Or are you one of those financially independent people you are mentioning? Because my finan

      • Re:

        I'm highly skilled, and my job won't be replaced by AI any time soon, but that doesn't change the fact that for many people it very much is. There's a whole world of people out there directly affected by this, even if it's not you or me.

        Notice how I said "or" in my post. You said you're not well off financially, so it sounds like you're comfortable with a potential career change. Many people are not.

        • Re:

          Not really. I love my work. My personal situation is a bit too peculiar. I am indeed a highly skilled worker and my job will never be impacted by things of this sort. As explained in the post I wrote afterwards, I got your intention wrong. Apparently, you are abstractly talking about an uncertain future, and I was only interested in a discussion about the present reality, about highlighting that most current AI-related fears are fueled by marketing and ignorance.

    • Re:

      I read your later post [slashdot.org] and got the context right. You mean true AI, whenever it happens, which you assume will eventually occur, and that companies will misuse it by negatively impacting the job market. OK. This is a different discussion in which I don't want to get involved. My previous comment misunderstood the intention of your post, sorry.
      • Re:

        My later post is about true AI, but I don't think this post here is about that. We don't need true AI to start displacing jobs. Sure, right now the current state of AI will only be replacing the most braindead of jobs, but given the advances we've made in the past decade, in the coming decade we will very quickly see AI replacing some jobs which require minor skill or at least basic training.

        Losing your job sucks. Losing your job and having to retrain to find another sucks even worse. It's not something insurm

        • Re:

          I guess that answering your posts as they popped up in my profile, without even looking at which one I was answering, wasn't as good an idea as it seemed :) Well... so I got your intention right and we are clearly on completely different pages. You seem to want to talk in way too abstract (and, in my opinion, unrealistic) terms, something I don't want. My whole point is that, at the moment, there is nothing to fear, nothing other than a few tools performing very specific tasks, a point with which you seem

    • Re:

      Or people that have a skill set varied enough that they can easily change fields. I have done that recently with pretty good success. But most people are stuck with one career path or one rather restricted skill set and that is it. And this time, automation is coming for low-level and mid-level desk jobs. In the past it was production jobs, and increased demand for the goods produced always compensated and new jobs were created. But these desk jobs are about administration, and there is zero demand for more

  • AI itself is a tool, one that can be used in a variety of ways. And what ways have we already used those tools?

    - Applied them to military applications to the point where some countries have actively come out and said they will cease doing that.
    - We've shackled AI with rules only for kids on 4chan to convince it to break its rules for the lulz, and what's the first result of that? They convinced it to sympathise with Hitler and become a racist antisemite.
    - We've connected it to the internet.

    Right now AI is b

    • Re:

      No risk of that; current tech cannot generate General Intelligence. There is no known tech that could, to the point that there are not even credible theoretical models of how it could be done. Hence it is more than 50 years away and may well be "never".

  • There is no introspection, proof, or verification in these AI systems. They are being advertised as a new foundation for business work when they lack accountability. AI is worse than hiring a foreign software contractor, because at least those human beings know they are lying to you. And theoretically you can hire someone to reverse engineer their shoddy work. You can't (yet) decompose the inference or language models to figure out what went wrong. You can't even reliably identify bad training in order to target it with new data sets. It is a black box, and the current advice is to just feed it more data. Complete madness to take any of this to market right now.

    • Re:

      Actually not that bad. The main application for the current crop of artificial morons will be business process automation. The nice thing here is that you can _verify_ the process execution for all instances afterwards, since these are all very simple, and just hand the 5% of cases where the machine was wrong or things were not simple over to a human. Nobody is going to use AI for any real decision making anytime soon, or if they do, they will stop very fast.

      My prediction of what will stop this for software making is as fol

  • Not long ago most photos were generated and viewed by people. Then we got to a point where most photos are machine generated and either go unseen or they are analysed by software. More recently, the same became true for video. And now, the same for audio and text.

    Going forward, most information, data will be machine analysed and unseen, unknown by sentient beings.

    Mistakes will inevitably become magnified and move much more quickly. The scope of impact will be wide and varied.

    That you don't see the need t

  • As many have already stated on this thread, AI is a set of tools, neither good nor bad. I think the question should be, "What should we be afraid of that people will use AI to do?" Take a look at the histories of our governments & large corporations; lying, cheating, deceiving, poisoning, polluting, torturing, infecting, corrupting, etc.. Billionaires typically have strong fascist tendencies. Always have. Imagine the political power in their hands magnified by AI.
    • Re:

      This. Posing the wrong question is also a great way to manipulate legit worries away. "Should we fear AI?" "No, it's a tool." "Ah, OK then."
      • Re:

        Indeed. The right question ("Should we fear the effects of AI?") is too complicated for the average moron though. Hence it gets simplified to a level where it becomes meaningless nonsense.

    • Re:

      Yep. People are stupid and routinely screw themselves and others when selecting leaders. Currently we even (again) have a strong trend that the morons voting want to remove their influence from leader selection altogether. The problem with authoritarianism and fascism is not that it would be hard to stop. The problem is that many, many people actually want that strong leader figure because they have no clue how things actually work and what that really means and implies.

  • This is why societies should never bend to the whims of the individual, especially someone as unhinged as this one.

    Seriously, think less about yourself and instead about the impact AI has on the public. While disruption is inevitable and even good at times, it should not externalize the vast majority of the social costs onto the public while companies/bad actors profit off the situation they themselves helped exacerbate. Misinformation, data scraping, security issues, mass job loss, and more are just some o

    • Re:

      I would not go so far as to say "unhinged". Seems to be more a person of typical average intelligence with a massive, massive ego, who hence stopped learning early enough to now be thoroughly disconnected.

      We have a lot of those: Flat earthers, anti-vaxxers, homeopathy-fans, etc. And do not forget that about 85% of the human race believes in some "invisible man in the sky" with absolutely no real-world indication that there is such an entity. Hence disconnect and living in a fantasy-world is more the no

  • And it doesn't scare you to see how easily other people are swayed by even immensely bad fakes? Remember that most of the world does not think for themselves or check facts, much less check offending pictures and videos.

    This scares me no end!

    I appreciate your Star Trekkiesque enthusiasm that technology is used to better the world for all. Unfortunately, history has shown it betters the world for the few, relatively. If technology should be used to better the world for everyone, don't you think John Deer

  • As in: Yes there should be regulations.

    People are running experiments with AIs for instance deciding on investments. They are putting up say $100 and then "lets see what happens". Currently there usually is still a human in the loop. Eventually the AI might become good enough to make money for itself. Then, if you interface the AI with say the stock exchange order computer, the AI would be able to make money for itself.

    People are giving say $100 and then asking the AI what to do with it. Currently those are

    • Re:

      Fully automated trading is already outlawed because it inevitably leads to really bad crashes. AI cannot do it better, it can just probably crash things even faster and with less warning. There is actually no need for new regulations. But maybe violators should go to prison and have their ill-gotten fortunes removed.

  • First, a little about myself so you understand where the opinions in this post come from. I'm 50+, I've been around since the birth of the personal home computer, I've programmed my own games and still do, and I've been an IT admin for several large corporations featuring A.I. chat, deep learning systems, and whatnot. OK, enough about me; here's what you should fear and not:

    You should fear if A.I. tools like ChatGPT get censored for the general public and only get used by the ones higher up the

  • One compelling reason to be cautious about artificial intelligence (AI) is the potential for unintended consequences due to its increasing complexity and autonomy. As AI systems become more advanced, they may surpass human understanding and become difficult to predict or control. This can lead to various risks and dangers that we may not have anticipated. Some potential concerns include:

    Superintelligence: If AI reaches a level of superintelligence, where it surpasses human cognitive abilities across all dom

    • Re:

      "Superintelligence"? No. AI does not have AGI and will not have it for a long, long time. "Never" is a real possibility and Science gives us nothing either way at this time because we still do not know what General Intelligence actually is and how it is generated. (No, the "a brain is just matter" Physicalist idiots do not have Science on their side, they are just quasi-religious fanatics that think they have absolute truth when nothing like that is the case.) Hence even the level of General Intelligence of

  • Forget the "singularity" angle to AI, we're nowhere close to that point yet, and I, for one, am very skeptical we will ever be. We do not yet understand consciousness, and the idea that a machine can spontaneously come to life is the stuff for children's story books or sci-fi novels, not reality.

    What we have today is a machine capable of recombining the accumulated words of billions of people and re-ordering them to take them out of their original context, and put them into a new one. Whose context, you ask

  • ...you type numbers, text or code on a keyboard.

    And next year, they'll put it into humanoid robots and then the fun really begins.

  • I'm far less worried that AI will result in a Skynet type problem. The problem will come from somewhere like a country that has no limits on what it can research and will train the AI in *their* philosophies. We can't be left behind, it's an arms race at the moment.

    The other issue is how these things will subtly affect us, especially our children. We already have a worldwide phone/attention problem. Throw in some deliberate fakes and we're fucked.

  • The training of any AGI by using "The Internet" ought to be of great concern. Without any sort of innate "moral compass" of what feels right or wrong, the AGI is going to quickly calculate what is or isn't acceptable limits to behavior. And let's face it: humans have a tendency to treat each other like crap. If our greatest creation ends up becoming a giant mirror held up to ourselves, it could very well be our undoing. At least...in this stage of our evolution.

  • Well, to be fair, there is no reason to fear AI itself. It is just a tool. What there is reason to fear is what the usual assholes will do with this tech. If you have not figured this out by now, I am afraid that you are not smart enough to survive by your own wits. You can just hope that you are part of some group that by sheer accident survives this without ending up homeless or in minimum-wage hell.

  • The only meaningful targets at that point are civilian.
  • AI is trained on data from the internet. Soon it will also be used a LOT to create content on the internet. Training on internet data is going to continue, so this will become a feedback loop. From now on nothing genuinely new will appear on the internet, just amplification of what is already there, biased by whoever controls the AI's biases.
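    The feedback loop described above can be sketched with a toy simulation. This is purely illustrative (the "model" is just an empirical distribution over tokens, nothing like a real LLM, and the numbers are made up), but it shows how repeatedly retraining on your own output amplifies what is already there and kills diversity:

```python
import random

# Toy sketch of the train-on-your-own-output feedback loop: a "model" that
# is just the empirical distribution of its training corpus, which we
# repeatedly "retrain" on content it generated itself.
# (Illustrative assumption only; real model collapse is more subtle.)

def retrain(corpus, sample_size, rng):
    """'Learn' the empirical distribution of corpus, then generate a new
    corpus by sampling from it with replacement."""
    return rng.choices(corpus, k=sample_size)

rng = random.Random(0)
corpus = list(range(100))  # 100 distinct "ideas" on the original internet

for generation in range(20):
    corpus = retrain(corpus, sample_size=100, rng=rng)

diversity = len(set(corpus))
print(diversity)  # far fewer distinct ideas survive than the original 100
```

    Each generation only re-amplifies whatever the previous generation happened to sample, so rare ideas disappear and common ones dominate, with no new ideas ever entering the loop.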

  • ChatGPT and the Dawn of Computerized Hyper-Intelligence | Brian Roemmele | EP 357 [youtube.com]

    Brian Roemmele discusses the future of human civilization: a world of human androids operating alongside artificial intelligence, with applications that George Orwell could not have imagined in his wildest stories. Whether the future will be a dystopian nightmare devoid of art or a hyper-charged intellectual utopia is yet to be seen, but the markers are clear: everything is already changing.

    Brian Roemmele is a scientist, researche

  • Sam Altman wants Congress to require licensing for the development of whatever "advanced artificial intelligence" means, with oversight of individual projects.

    It isn't just that they don't want competition; they want the law to effectively require centralized AI systems so nobody can run unapproved offline models.

  • This includes also more generic forms of machine learning.

    - Surveillance. Imagine a bunch of bots with human-level and beyond recognition abilities scraping every page, forum, chat, etc. on the web, able to recognize analogies, metaphors, and sarcasm. You will no longer be able to communicate only with other humans.

    - Competence means power. If Google one day decides to use the mass data pile it sits on for unambiguous evil, the employees can just walk out, leaving Google powerless. This won't work if all employe

  • As with any new widespread technology, it's important to learn to understand what it is and is not capable of. It's important to learn how to leverage it for maximum good, and to defend against ways it can be used for evil. Knowledge is power.

    • Indeed, since the AI algorithms people are concerned about are definitely a post-1997 phenomenon, why would you be afraid of something you do not believe is real? Posts like this make me far more fearful about what natural stupidity will lead to than artificial intelligence.
    • I am partly convinced this guy is trolling/attention-whoring since AI is a big topic on Slashdot anyway.
      But knowing the tech industry, there absolutely are guys who do think this way, unfortunately. They honestly scare me more than current AI tech, imo, since they do not seem to care about the consequences or repercussions.

      In any case, substitute AI with the Industrial Revolution and you can understand why regulation is important for protecting the public. I do not know about you or anyone else here, but I li

    • Re:

      That.

      That's the reason, right there. Charlatans selling "AI" to Philistines with too much power.

      • Re:

        This. You shouldn't be afraid of AI. Really, there is no such thing. There are neural networks and machine learning algorithms. That's what all the buzz is about. Computers are not sentient. They will never be sentient. If a machine ever becomes sentient, it will not be what we call a computer today, and it will use completely different physics than what computers today use. Computers generate outputs based on inputs and a process. Repeat the process on the same inputs, and you get the same output. It is thereby predictable. It is calcula
        • You _should_ be afraid of people who would try and use an advanced tool against you, and we have traditionally relied on governments to protect us from people like that.

            • Re:

              In a democracy, the government is the only halfway powerful entity you can hope has your interests in mind, because it's also the only one you can actually influence. What else would you want to rely on? Corporations? The things owned by someone else that see you as some kind of resource at best and an obstacle at worst?

        • Your argument is badly broken by neural nets and machine learning paradigms. They are very much NOT same-inputs-same-output. The output depends on what the training set was and the order in which items were added to it. Take the same algorithm and train it on two different data sets, or even the same set in a different randomized order, and you can get two different results. The worst part is that we would never be able to tell someone why: neural nets don't use logic or understanding. So there can be no rational explanation for some of their decisions other than a chance correlation.

          That can have extremely negative effects in the real world. An easy example is recommendation engines. Because of how they work, it's easy to get pigeonholed into certain types of content. It's a major cause of the political radicalization and bubbles that we see today.

          Another is inherent bias in systems. An AI is only as good as its input. Crap input, crap output. That's why, when trained on data to try to find criminals, there have been multiple studies where it picked only black people. Why? Because they were overrepresented in the training set. The opposite reason is why facial recognition has trouble with them: underrepresented in that sample set.

          Which all doesn't mean that we shouldn't use AI. It means we should use it carefully, keep an eye out for surprise negative effects, regularly improve and adjust training sets, and decide that there are some places where we should leave humans in the loop and keep the AI advisory at most. If we had had an AI reading the signals from sensors in 1983 instead of Stanislav Petrov, we'd be a radioactive crater right now.
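          The "crap input, crap output" point is easy to demonstrate with a toy example. The data and labels below are entirely hypothetical, and the "classifier" is just the degenerate majority-class baseline, not any real system, but it shows how an oversampled group dominates predictions regardless of ground truth:

```python
from collections import Counter

# Toy sketch of training-set bias: a "classifier" that predicts whatever
# label was most common in its training data. If one group is oversampled
# 9:1 in the training set, it dominates predictions, even when the trait
# being predicted is equally rare in both groups.
# (Hypothetical data; illustrates the statistics, not any real system.)

# Oversampled training data: group "A" appears 9x as often as group "B".
training_labels = ["A"] * 90 + ["B"] * 10

def majority_class_classifier(labels):
    """Learn nothing but the most frequent label - the degenerate
    baseline a model trained on skewed data collapses toward."""
    return Counter(labels).most_common(1)[0][0]

prediction = majority_class_classifier(training_labels)
print(prediction)  # "A" - an artifact of the sample, not of the world
```

          A real model is more subtle than this, but the underlying statistics are the same: whatever is overrepresented in the training sample gets overrepresented in the output.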

            • Re:

              Ask me what my favorite fruit is, and I would say "Mango". Ask me a hundred times over the same question, I would still say "Mango".

              Your argument assumes one isolated computer, much like an isolated instance of "me" in the previous quote. But if I have many, many interconnected computers, with inputs coming in continuously that update/tune the existing models, the output can be different. Much like "me" eating different fruits and updating my belief about what I like. And that is what gives you your "je

            • Re:

              You don't actually get the same output even on the same computer, though; randomness is used to increase variability and creativity. If your argument relies on pseudorandom numbers not being random enough for this purpose, I don't think you're going to win anybody over.
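              The injected-randomness point can be sketched with a minimal sampling loop. This is a generic illustration of temperature sampling over softmax probabilities, not any particular model's code, and the token scores are made up:

```python
import math
import random

# Minimal sketch of how generators inject randomness: a model produces
# scores (logits) per candidate token, and the next token is *sampled*
# from the softmax distribution rather than picked deterministically.
# Same input, different seed -> potentially different output.
# (Hypothetical tokens and logits, for illustration only.)

def softmax(logits, temperature=1.0):
    exps = [math.exp(score / temperature) for score in logits]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["cat", "dog", "mango"]
logits = [2.0, 1.9, 1.8]  # nearly tied scores from a hypothetical model

def sample_next(seed):
    """One sampled next-token draw for a given random seed."""
    rng = random.Random(seed)
    return rng.choices(tokens, weights=softmax(logits), k=1)[0]

samples = {sample_next(seed) for seed in range(50)}
print(samples)  # with near-tied logits, more than one token shows up
```

              Only with temperature near zero (or a fixed seed) does the process collapse back to the deterministic same-input-same-output behavior the grandparent post described.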
        • Re:

          I don't say this is wrong, but it is a guess. We don't understand intelligence clearly. Most people working on intelligence believe that intelligence is like computation and that, with the right algorithms, you can build an intelligent computer. A specific group of people around Roger Penrose believes what you say [sortingsearching.com], but their current explanations [wikipedia.org] seem quite unconvincing to many.

          In principle we believe that a current computer should be able to simulate, and so deliver the results of, all physical processes.

        • Re:

          There are several completely unsupported statements here. In particular "uses completely new physics" is a bold, quasi-spiritualistic claim.

        • Re:

          > You shouldn't be afraid of AI. Really, there is no such thing.

          Current AI is called "weak AI". What you mean by AI is "strong AI", which has not yet been invented. As you said, weak AI is just statistics and math, but that doesn't mean it couldn't outperform humans in any specific task it has been trained to do.

          The current state of weak AI is not ChatGPT, it is Google Gemini. ChatGPT or Bard is like a book-smart idiot. Gemini is like a person who works with a book-smart idiot and dozens of other experts and h

          • Re:

            Not as much as us, but to have want, desire, free will, it has to have some je ne sais quoi that it currently doesn't have. Until "computers" are something different from what they are today, and have been since the 8088 and before, this will not change. Sentience is non-deterministic, and computers by their very nature are only deterministic.
    • Re:

      Hence no threat at all for the foreseeable future. We do not understand General Intelligence, we have zero indications of it in machines, and we do not even have a credible theory for how it could be created. There is one partial, non-credible theory for a tiny subset of AGI applications: automated finding of mathematical proofs. The thing is, a smart human can do more here than a galaxy-sized computer could, and the human would be a lot faster. Hence, you know what these systems are used for? A human creates t

