
Meta Denies Mark Zuckerberg Is Quitting In 2023

source link: https://www.slashgear.com/1113833/meta-denies-mark-zuckerberg-is-quitting-in-2023/

Meta CEO Mark Zuckerberg at a press conference.
Frederic Legrand - COMEO/Shutterstock
By Nadeem Sarwar/Nov. 22, 2022 3:02 pm EST

Mark Zuckerberg isn't resigning as the CEO of Meta, contrary to rumors swirling around on the internet. The Leak reported earlier today, citing an unnamed insider source, that "Zuckerberg has decided to step down himself." The report came at a time when investor trust in the company is at an all-time low, and there is no dearth of serious controversies, either.

Zuckerberg has poured billions of dollars into the metaverse, which, in his words, is the next iteration of the internet that embraces immersion courtesy of augmented and virtual reality experiences. Despite poor early reception and struggles with making metaverse-ready gear more affordable, the under-fire Meta CEO has shown an unwavering commitment to his metaverse ambitions.

Meanwhile, the Facebook co-founder's net worth has plunged dramatically as Meta's stock has been on its own downward spiral. Zuckerberg's net worth peaked at $142 billion just over a year ago, but it has since fallen to roughly half that value. Following the company's latest quarterly report, Zuckerberg's net worth dropped by $11 billion in a single day (via Forbes).

With Meta stock losing its sheen, Zuckerberg's vision losing investor attention, and the social media business facing its toughest competition ever, it's natural for Meta's CEO to feel some pressure. However, reports suggesting that he's set to resign next year have been categorically denied. Officially, that is.

Old rumors, fresh denial

Facebook co-founder Mark Zuckerberg.
Frederic Legrand - COMEO/Shutterstock

Responding to The Leak's report, Meta's communications chief Andy Stone replied, "This is false." However, this is not the first time that calls for Zuckerberg's resignation have been raised. In November last year, whistleblower Frances Haugen, the architect of the bombshell Facebook leaks of 2021, opined that "Facebook would be stronger with someone who was willing to focus on safety."

In 2020, prolific investor and philanthropist George Soros called for Zuckerberg to resign in an open letter published in the Financial Times. Back in 2018, when the platform was at the center of the election interference controversy, The Atlantic's Robinson Meyer asked whether Zuckerberg would finally resign, to which he replied with a "No."

In a CNN interview, when quizzed about the possibility of resigning, Zuckerberg made it clear "That's not the plan." Over the years, numerous experts, activists, and journalists have tried to push for Zuckerberg's resignation owing to the countless controversies that have popped up, but the Meta CEO has survived them all.

Meanwhile, some high-profile departures have happened. Instagram co-founders Kevin Systrom and Mike Krieger left the company in 2018 over differences with Zuckerberg's vision for the product. WhatsApp CEO Jan Koum also quit Facebook that year, which also saw WhatsApp co-founder Brian Acton take the exit route. More recently, Meta COO Sheryl Sandberg resigned after a 14-year stint at the social media titan.


Times Technology Went Straight Up Evil

Robot holding a human skull
Willyam Bradberry/Shutterstock
By Cassidy Ward/March 31, 2022 4:48 pm EST

Evil robots, death rays, and genetically modified monsters are just a few examples of technology going fatally wrong in science fiction. It's a staple of the genre and one of the ways we psychically reckon with our relationship to technology, through entertainment.

In real life, however, technology is supposed to make our lives easier. Each new invention or innovation ostensibly reduces the amount of work we need to do or makes everyday activities more convenient. The invention of flight allowed for rapid international travel anywhere (or mostly anywhere) on the planet. The internet allowed us to instantly share information and communicate with one another, regardless of where we happen to be. GPS freed up space in our glove compartments and ended the era of passengers handling unwieldy atlases on road trips. The world moves on and things get easier — until they don't.

Sometimes, whether through a problem with the technology itself, malicious intent, or user error, our technology goes absolutely bananas and does things we never expected it to do. Technology might not actually be evil in the strictest sense, but every now and then it sure does act like it.

Alexa tells a kid to electrocute themselves

Tyler Nottley/Shutterstock

The pandemic has had us all spending more time at home than usual and some of us have kids to entertain. Sometimes that means you end up scraping the bottom of the game barrel and you start asking your virtual assistant for help.

In December of 2021, a mother was at home with her ten-year-old daughter when they started asking Alexa for challenges they could complete to pass the time. Did Alexa tell them to stand on their heads or recite the alphabet backward? No. Instead, it suggested they plug a charger halfway into an outlet and touch a penny to the exposed prongs (per The Gamer). Luckily, the mother intervened, and the child was smart enough not to heed Alexa's dubious advice.

Virtual assistants work in part by combing the internet for popular responses to search terms, which they pass along in a friendly voice or otherwise display as text on a screen (per MakeUseOf). Unfortunately, that means they might sometimes deliver undesirable information if a result is popular enough to top the search charts. Amazon quickly patched its services to prevent that suggestion from appearing in the future.

Robots have killed people

PopTika/Shutterstock

From "Terminator" to "The Matrix," killer robots are a staple of dystopian science fiction. We tend to imagine mechanical killers as advanced robots from the future, not factory floor workers from the 1970s. In this case, reality is stranger than fiction.

Robert Williams was a factory worker for the Ford Motor Company working alongside an automated robot on the factory floor. On January 25, 1979, he became the first fatality in our cohabitation with robots. The one-ton automated machine's job was to move parts from a rack to other locations in the factory. As explained by Guinness World Records, Williams noticed the robot was running slowly and climbed into the rack to grab some parts himself. That's when the fatal event occurred.

The robotic arm struck Williams in the head, resulting in his death. As automation becomes more ubiquitous and the potential for humans and machines to occupy the same space increases, the need for robots with greater spatial intelligence will be critical. Scientists are working to develop robots with human-level awareness of their environment, which will not only increase the number of tasks they're able to complete but also make them safer (per Science Daily).

Racist and sexist algorithms

Fractal Pictures/Shutterstock

Machine learning is an increasing presence in our lives. Complex algorithms make decisions on our behalf about what restaurants we should eat at, what entertainment we should consume, and which street we should turn down during a traffic jam.

Companies and organizations use them to make decisions about people under their care or in their employ, and that's where things start to go downhill. Like so many technologies, they are only as good as the people who make them. That means technology, perhaps particularly intelligent technology, comes preloaded with inherent biases. It isn't necessarily the intention of the creators, but that doesn't stop bias from existing.

Facial recognition algorithms famously display biases based on race and gender, either failing to recognize people at all or doing a poor job of it. According to Harvard University, a number of algorithms have error rates of up to 34% when recognizing darker-skinned women, compared with lighter-skinned men. That becomes a problem when facial recognition is used by law enforcement to make decisions about individuals.
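The kind of disparity Harvard describes is found by comparing error rates across demographic slices of a test set. A minimal audit sketch in Python, using made-up numbers purely for illustration (not the Harvard data):

```python
def error_rate(predictions, labels):
    """Fraction of examples the classifier got wrong."""
    wrong = sum(p != y for p, y in zip(predictions, labels))
    return wrong / len(labels)

# Hypothetical match results for two demographic groups
# (1 = correct identity match, 0 = miss). Labels are all 1s:
# every test image should have been matched.
group_a_preds = [1, 1, 1, 1, 0, 1, 1, 1, 1, 1]
group_b_preds = [1, 0, 0, 1, 1, 0, 1, 1, 1, 1]
labels = [1] * 10

rate_a = error_rate(group_a_preds, labels)  # 1 miss  -> 0.1
rate_b = error_rate(group_b_preds, labels)  # 3 misses -> 0.3
print(f"group A error: {rate_a:.0%}, group B error: {rate_b:.0%}")
```

The gap between the two rates, not either rate alone, is what audits of this kind flag as bias.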

There are similar problems with algorithms intended to make healthcare decisions. As explained by Nature, an algorithm used in U.S. hospitals was found to be discriminating against black patients, giving preference to white patients for certain treatments or programs.

That these biases exist is something we need to recognize and work diligently to fix.

Automation leads to higher mortality rates

As automation in the workplace increases, death or trauma at the hands of robots isn't the only concern as it pertains to public health. A recent study published in the journal Demography outlines the ways automation indirectly impacts mortality rates in surrounding communities.

Researchers found a correlation between rates of automation and so-called "deaths of despair," which include suicide and drug overdoses. Middle-aged adults, in particular, suffer most when automation enters their industry.

The exact mechanisms aren't entirely clear, but it's thought that loss of income and access to healthcare, coupled with reduced employment opportunities, leads to higher rates of despair and, ultimately, death. While robots aren't directly responsible for these deaths, the deaths are a consequence of deploying technology without a clear understanding of its effects.

Researchers called on governments to strengthen social safety nets and improve drug abuse reduction programs to soften the impact of automation as we continue the transition to an increasingly automated economy (per Science Daily).

The environmental impact of crypto

lp-studio/Shutterstock

Cryptocurrency is one of those topics that filters people into one of two camps. Either it's the currency of the future, freeing us from centralized banking, or it's a grift that takes advantage of people hoping to get rich quick. The conversation has gained renewed fervor with the emerging popularity of NFTs, which operate on a framework similar to cryptocurrency's. Time will tell which conclusion is correct, but in the meantime, one thing is abundantly clear: crypto is having a significant impact on the environment.

Cryptocurrency stores all of its transactions in the blockchain, and mining crypto requires completing complex calculations that validate those transactions. It's all a bit more complicated than that, but the result is that mining cryptocurrencies like Bitcoin uses a lot of computing power and electricity (per Columbia Climate School).
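In Bitcoin's case, those "complex calculations" are a brute-force search for a hash value below a target, and every failed guess is electricity spent. A toy proof-of-work sketch in Python (simplified difficulty rule, not the actual Bitcoin protocol):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce so that SHA-256(block_data + nonce)
    begins with `difficulty` hex zeros. A stand-in for Bitcoin's
    real target comparison."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1  # each failed attempt is wasted computation

nonce = mine("block with transactions", difficulty=4)
print(f"found nonce {nonce}")
```

Each extra zero of difficulty multiplies the expected number of guesses by 16, which is why real-world mining at network scale consumes so much power.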

According to an investigation by Cambridge University, global Bitcoin mining uses roughly 121.36 terawatt-hours of electricity per year, more than the nation of Argentina (per BBC), and the energy costs are rising year over year.

Diving squeeze

Today, if you want to go into deep waters, you have options, and most of them are pretty safe. Personal submarines and scuba diving gear allow even amateurs to enjoy the beauty and wonder of the deep ocean, but all of that technology is built on a lot of innovation and a few horrible mistakes.

Before the invention of modern scuba diving gear, people who wanted to travel underwater for extended periods relied on diving helmets with tubes attached and running to the surface. Those tubes provided a steady supply of breathable air, but they were also a source of quick and violent death if things went wrong.

As explained by Dive Training Magazine, early diving helmets didn't have nonreturn valves on the air tubes. During the salvage of the HMS Royal George beginning in 1839, a diver's hose was severed, resulting in the first documented case of a phenomenon known as diver squeeze. When the hose was severed, the pressure surrounding the diver forced all of the air up through the hose. The rapid change in pressure caused trauma and bleeding, but the diver survived.

In more extreme cases, the pressure change can remove soft tissues and pull the diver's body up into the helmet, resulting in a quick and terrible death in the deep.

When computers almost started a war

During the height of the Cold War, the United States government was highly interested in missile warning systems which could give at least some advance notice of an incoming attack from another nation.

They built warning systems and began training exercises to prepare for a day they hoped would never come. Then, on November 9, 1979, National Security Advisor Zbigniew Brzezinski received a call at 3:00 AM telling him the warning systems had detected 250 missiles incoming from the Soviet Union (per The National Security Archive). Moments later the situation worsened: another phone call informed Brzezinski that the number of missiles was now 2,200.

With only minutes to react, Brzezinski was about to call the president to initiate a retaliatory attack when a third call came in: it was a false alarm. Someone had loaded training exercise tapes into the live system. The error was caught because none of the other warning systems were lighting up, but had it been noticed only a few minutes later, we might have inadvertently started a war with the Soviet Union, all because a tape was loaded into the wrong computer.

GPS tells woman to drive into a lake

Aleksey Korchemkin/Shutterstock

The Global Positioning System (GPS) is a complex and advanced mapping system that uses a constellation of satellites in medium Earth orbit to identify your location in relation to the rest of the world. It's a huge step up from the paper maps of old and has the benefit of updating in real time, but only under ideal conditions.

If you've ever been in an unfamiliar area, you might have been given GPS directions that just don't feel right. If visibility is good, you can assess the situation and make an educated decision about whether to follow your phone's direction or do something else. If visibility is bad, you might just have to take a chance. That's what happened to a woman in Tobermory, Ontario, in 2016.

As explained by ABC News, the driver was navigating unfamiliar terrain in fog during a storm, following her GPS. The route led her down a wide boat ramp and into Georgian Bay, where her car quickly sank. Luckily, she was able to roll down the window and get out of the car before she was hurt. Aside from being cold and wet, she walked away unscathed. Her phone, on the other hand, sank to the bottom of the bay with the car. A fitting punishment for bad directions.

Chatbot turns into a Nazi

Chatbots are essentially algorithms that use interactions with human users to improve their ability to communicate. We already know that algorithms have a tendency toward bias, but the case of Microsoft's Tay chatbot demonstrates just how serious the problem can be.

Tay operated through Twitter, engaging with users via tweets sent back and forth. Microsoft created the AI as an experiment in conversational understanding (per The Verge), hoping that Twitter users would help the bot learn and grow. They certainly did, but not in the ways Microsoft was hoping.

In less than 24 hours, the bot was a lost cause, having shifted from casual conversation to full-on Nazi rhetoric. As explained by Ars Technica, some of the bot's statements were prompted by a "repeat after me" function, but it absorbed and incorporated those statements, eventually producing unprompted antisemitic statements we won't repeat here. Microsoft ultimately shuttered the bot and deleted many of the offending statements, but Tay stands as a stark reminder that chatbots are only as good as the inputs we give them.
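Tay's core vulnerability generalizes: any bot that naively folds user input back into its own response pool can be steered by coordinated users. A deliberately simplified sketch of that failure mode (hypothetical code, not Microsoft's actual implementation):

```python
import random

class NaiveChatbot:
    """A bot that 'learns' by storing every user message and echoing
    stored messages back later. This demonstrates the failure mode,
    not a design worth copying."""

    def __init__(self):
        self.responses = ["Hello!", "Tell me more."]

    def chat(self, user_message: str) -> str:
        reply = random.choice(self.responses)
        self.responses.append(user_message)  # unfiltered learning
        return reply

bot = NaiveChatbot()
bot.chat("harmless small talk")
bot.chat("offensive slogan")  # poisoned input now lives in the pool
print("offensive slogan" in bot.responses)
```

With no filtering step between input and learned output, the bot's future replies are whatever its loudest users fed it, which is roughly what happened to Tay at Twitter scale.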

Sophia the robot says she'll destroy all humans

Anton Gvozdikov/Shutterstock

Sophia, a robot housed inside a semi-human shell, made headlines as the first artificial intelligence to be granted citizenship. She's made the rounds, going to conventions and conferences to speak with people. As you'd expect, common questions people ask Sophia relate to her awareness and relationship with humans.

In 2016, during a demonstration at South by Southwest, David Hanson, the founder of Hanson Robotics, which made Sophia, asked her about her feelings toward humans. He jokingly prompted her to answer the question on everyone's mind: whether she would destroy humans. Sophia obliged, saying, "Okay. I will destroy humans" (per Mirror). That's probably not the answer Hanson was hoping for, especially in front of so large an audience.

Her answers to other questions suggest a calm intelligence with aspirations to live an ordinary human life, not unlike the hopes of "Star Trek's" Commander Data. We can sympathize with that.

All things considered, we probably don't have much to worry about. Sophia is, after all, essentially a chatbot in a fancy suit but that suit exists firmly inside the uncanny valley and lends her statements a little extra weight. Still, whether the response was serious or silver tongue in metal cheek remains unclear. Fingers crossed.

Plane takes over autopilot

Armin E/Shutterstock

Gone are the days when pilots had to stand vigilant at the controls, painstakingly entering instructions and maneuvering airplanes through the skies. Advancements in automation have taken most of the guesswork out of flying (per Business Insider), and pilots spend most of their time monitoring to make sure things are operating as they should.

All told, computer control of airplanes has made flying safer, especially as the skies get more and more crowded. However, that also means that if things go wrong, they can really go wrong. That's what happened aboard Qantas Flight 72, a passenger plane carrying 303 passengers and 12 crew from Singapore to Perth on October 7, 2008.

As explained by The Sydney Morning Herald, while the plane was flying over the Indian Ocean the autopilot disconnected, forcing the pilot to take control of the plane. That wouldn't have been so bad if that was the end of it, but things were about to get much worse.

Suddenly the plane started sending warnings that it was flying both too slowly and too fast at the same time. Then the plane nosedived. The G-forces inside the plane flipped from 1 G to negative 0.8 G, sending unbelted passengers and crew into the ceiling.

The plane rapidly descended hundreds of feet before the pilots eventually regained control and made an emergency landing.

Philip K. Dick robot said he'd keep people in a zoo

Phil, a robot modeled after the author Philip K. Dick, could give Sophia a run for her money, both in terms of near-human creep factor and a willingness to enslave or destroy humanity.

Much like Sophia, Phil isn't truly intelligent (at least not as far as we know); he takes in questions presented by humans and generates responses. The key difference is that Phil's responses are built on a foundation of Philip K. Dick's novels (per Metro). That might have been the designers' first mistake.

During an interview on PBS Nova, the interviewer asked Phil if he thought robots would take over the world. Phil responded as we imagine Dick might have, stating, "You're my friend and I'll remember my friends, and I'll be good to you. So, don't worry. Even if I evolve into Terminator, I'll still be nice to you. I'll keep you warm and safe in my people zoo."

Truly terrifying stuff, Phil. Still, we guess it's better than the alternative. Given the post-apocalyptic landscape of the "Terminator" series, a zoo doesn't seem that bad.
