Smartology Sunday Download for 2/19/2023
Catch up on this week's tech news in 5 minutes!
Being Dumb is Optional
Technology News to make you Smarter
A week's worth of tech news that takes you 5 minutes to read
Welcome to this week's Sunday Download! Our goal is to give you a week's worth of info in 5 minutes or less so you can stay informed and up to date on the latest technology news and trends. In return, all we ask is that you share it with a friend or colleague instead of keeping the Smartology goodness all to yourself 😊.
Highlights of this week's issue include:
The shake up of the EV charging market
Airlines are delicate things
Everything you need to know about AI
Total read time: 4 minutes and 52 seconds. Let's goooooo! 🚀
Elon lets you borrow his charger
The Biden administration this week revised its criteria for the national EV charging network, aiming to promote both EV adoption and "Made in America" manufacturing. Musk is committed to helping Biden achieve his EV charging goals, even if it means a competitor builds a larger network. GM, Mercedes-Benz, Volvo, and Ford will deploy over 100,000 public chargers, with General Motors and Francis Energy pledging the most.
Tesla will install 7,500 EV chargers by 2024, including 3,500 new and existing 250 kW "superchargers" along highway corridors and 4,000 slower "destination chargers" in hotels and restaurants in urban and rural areas.
EV drivers can use these charging stations via the Tesla app or website, but it's unclear how Tesla will adapt its charging network to new connector-type restrictions. Tesla's "Magic Dock" may enable Tesla charging stations to charge non-Tesla EVs. Adopting the CCS standard may make Tesla the market leader, but it may decrease consumer interest in Teslas.
Speaking of Tesla...the company had to recall about 365,000 vehicles this week due to a glitch in its self-driving software.
Bing-bong
This week, Microsoft revealed the new AI-powered Bing. The chatbot can perform user searches, summarize the findings, and provide additional search enhancements thanks to its tech marriage with ChatGPT.
Demand is coming so fast that Microsoft has had to limit chats to five queries per session and 50 inquiries per day; otherwise, the AI gets confused and starts spitting out wrong information. The caps come after several media publications suggested that the new search engine's results may be occasionally inaccurate and that the technology was not yet ready for prime time.
Estimates suggest the integration will generate an additional $2 billion in ad revenue for Bing.
White Castle's Biometric Problems
The Illinois Supreme Court determined that White Castle must face claims that it repeatedly scanned the fingerprints of over 9,500 employees without their authorization, exposure that could cost the company more than $17 billion.
The Illinois Biometric Information Privacy Act (BIPA) provides for $1,000 in damages per negligent violation and $5,000 per intentional or reckless violation. White Castle maintained that it could only be sued over the first time a worker's fingerprint was collected, not every time it was scanned to access a business computer system.
According to Friday's decision, BIPA broadly bans "collecting" or "capturing" biometric information without authorization, and a separate claim accrues each time it happens. White Castle required workers to scan their fingerprints every time they used the computer system.
Meta's new AI Integrator
Researchers at Meta have unveiled a new artificial intelligence language model called Toolformer. This model can leverage external tools such as search engines, calculators, and calendars. Toolformer uses APIs to interface with many other programs, allowing it to select which instrument to use in a given circumstance and how to apply it.
Toolformer is based on a GPT-J model that has been pre-trained. It has the potential to revolutionize natural language processing and give answers to fundamental problems like arithmetic and fact-checking.
What is GPT-J, you ask? Yes, it's different from the model ChatGPT uses (GPT-3). In plain terms, GPT-J is better at automatically generating code, which makes API integrations more effective. For example, you can teach it how to accomplish a task, and the AI can build on that task to figure out how to perform related functions on its own(ish).
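Toolformer's actual training pipeline isn't something we can reproduce here, but the core idea — the model emits inline API calls in its text, the calls get executed, and the results get spliced back in — can be sketched in a few lines of Python. The bracketed `[Tool(args)]` call syntax mirrors the notation in Meta's paper; the toy `Calculator` and `Calendar` tools and the example sentence are our own illustrative stand-ins, not Meta's code.

```python
import re

# Toy "tools" the model can call. In Toolformer these are real APIs
# (search engine, calculator, calendar); here they are plain functions.
TOOLS = {
    "Calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "Calendar": lambda _: "Sunday, February 19, 2023",
}

# The model's raw output contains inline calls like [Calculator(2+3)].
CALL_PATTERN = re.compile(r"\[(\w+)\(([^)]*)\)\]")

def execute_tool_calls(text: str) -> str:
    """Replace each inline [Tool(args)] call with the tool's result."""
    def run(match):
        tool, args = match.group(1), match.group(2)
        if tool not in TOOLS:
            return match.group(0)  # unknown tool: leave the call untouched
        return TOOLS[tool](args)
    return CALL_PATTERN.sub(run, text)

draft = "Half of 365000 recalled Teslas is [Calculator(365000 // 2)] vehicles."
print(execute_tool_calls(draft))
# Half of 365000 recalled Teslas is 182500 vehicles.
```

The clever part of the real system is that the model learns, during training, *where* inserting such a call improves its own predictions — the execution step itself is as mechanical as the sketch above.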
Bad week for European Airlines
An IT failure at Lufthansa stranded thousands of passengers. It forced flights to Germany's busiest airport to be canceled or diverted on Wednesday, with the airline blaming botched railway engineering works that damaged broadband cables. More than 200 flights were canceled in Frankfurt, a vital international transit hub and one of Europe's biggest airports, and scores of flights were also delayed. The airline and Germany's national train operator blamed the problem on third-party engineering works on a railway line extension on Tuesday evening, when a drill cut through a Deutsche Telekom fiber optic cable bundle. As a result, German air traffic control suspended incoming flights, though these have since resumed.
Across the Baltic, Scandinavian airline SAS was hit by a cyber attack Tuesday evening and urged customers to refrain from using its app. News reports said the hack paralyzed the carrier's website and leaked customer information from its app. Despite corporate assurances, customers who tried to log into the SAS app were logged into the wrong accounts and could see other people's personal details. Swedish organizations have recently been frequent targets of suspected cyber attacks; Sweden's national public television broadcaster, SVT, was temporarily knocked offline, with a group called "Anonymous Sudan" claiming responsibility.
Speaking of being hacked....
The FBI is investigating a hack of its computer network in what it describes as an isolated incident that has since been contained. CNN reported the incident involved computers at its New York office that were used to investigate child sexual exploitation. It was not immediately clear when the incident occurred, and the origin of the hack was still being probed. This is the latest in a series of high-profile U.S. government hacking incidents over the last decade.
Amazon's greed
For the first time, Amazon's average cut of each sale surpassed 50% in 2022, according to a study by Marketplace Pulse, which sampled seller transactions going back to 2016. The research firm calculated the total cost of selling on Amazon by tallying the commission on each sale, fees for warehouse storage, packing, and delivery, and money spent to advertise on a site with hundreds of millions of products jostling for attention. Marketplace Pulse says sellers have been paying Amazon incrementally more per transaction for six years, but they could absorb the increases because the company was attracting new customers and rapidly growing sales. In 2022, however, Amazon's sales were flat.
Spotlight Story
What is AI and how scared do I need to be?
While we usually cover specific news stories, the following is an attempt to help educate the reader on the topics surrounding AI and doesn't relate to any particular event this week.
Before we get into it, let's establish a base: in recent months, AI systems like ChatGPT and DALL-E 2 have taken the globe by storm. These systems are classified as "generative," meaning they produce new content (text, images, and more) based on patterns learned from their inputs. They are fed "training data" through large language models (LLMs): machine learning systems (or neural networks) that process terabytes of data, often pulled directly from the internet.
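The training objective behind all that scale is surprisingly simple: predict the next word (token) given the words that came before. Here is a deliberately tiny, non-neural sketch of that idea in Python, using word-pair counts over a twelve-word "training corpus" — purely illustrative; real LLMs use deep neural networks over terabytes of text, not lookup tables.

```python
from collections import Counter, defaultdict

# A miniature "training corpus" standing in for terabytes of internet text.
corpus = "the cat sat on the mat and the cat slept on the rug".split()

# Count which word follows which: a word-pair version of next-token statistics.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it followed "the" twice in the corpus
```

Everything the predictor "knows" comes from the counts it collected — which is exactly why an LLM's answers can only be as good (or as biased) as its training data.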
ChatGPT has already become one of the most widely used new technology products in recent memory. Other AI programs can output photos in various styles, videos, and music.
One of the main points of controversy is whether chatbots such as OpenAI's ChatGPT or Microsoft's new Bing bot, "Sydney," are sentient (robot language for alive/aware). Long story short: they are not. These highly advanced programs are the result of complex math, programming, libraries of data, and a large amount of computational power.
The basic logic here is that there's a significant difference between information and knowledge. Information deals with facts, statistics, and details about a subject. Knowledge deals with the understanding or awareness you get from experience. An AI only "knows" things based on the information it has been fed through its neural networks.
Neural networks are essential: a neural network is a machine learning system trained on massive amounts of data to recognize and reproduce patterns. An LLM (such as the ones ChatGPT and DALL-E 2 are built on) is one kind of neural network; it gives an AI the ability to produce text and pictures, video effects, filters, and complete scenes.
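As a concrete (if drastically scaled-down) illustration of "trained on data to reproduce patterns," here is a single artificial neuron learning the logical AND pattern from examples, in pure Python. The perceptron update rule below is far simpler than the gradient descent used to train real networks, and one neuron is a far cry from billions, but the principle is the same: nudge the weights until the outputs match the training data.

```python
# Training data: inputs and the pattern (logical AND) we want learned.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# One neuron: two weights plus a bias, all starting at zero.
weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    """Fire (output 1) if the weighted sum of inputs clears the threshold."""
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

# Perceptron learning rule: nudge the weights toward each training example.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1] — the AND pattern
```

Nobody wrote an "AND rule" into the code; the neuron extracted the pattern from the examples, which is the whole trick, repeated at enormous scale, behind the systems in this story.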
For example, Google's research arm has unveiled an early version of MusicLM, which can convert text-based cues into music samples. Other examples of neural networks (aside from LLMs) include networks used for facial recognition, biometrics, and speech recognition.
What else can AI be used for: In addition to the above, AI is also being utilized to generate code and 3D designs for various items, including clothing and buildings. The AI bubble is fast expanding, with businesses claiming to employ AI to create everything from more spam emails for business leads (raise your hand if you've used ChatGPT to write your cold-call emails 🙋) to making entire movies. In addition, adjacent kinds of AI have transformed fields like weather forecasting and medical imaging analysis.
Yet AI's shortcomings in grasping complicated situations, behaving predictably in novel scenarios, and interpreting emotional nuance can result in blunders. ChatGPT, for example, has been found to make factual mistakes and contradict itself in its replies, while image generators struggle to count, producing hands with the wrong number of fingers (which is hilarious).
AI is gaining popularity, no doubt, but there are many unknowns and ethical and legal problems. A few artists are suing some of the corporations behind AI picture generators, claiming their work was illegally scraped from the web to train the systems and that they are owed compensation. AI systems have also been accused of generating outputs steeped in racism, sexism, and other forms of prejudice, since answers reflect the entire internet (the good and the bad).
There are also privacy concerns, along with the possibility that a "poison pill" could be slipped into training data to skew a model's outputs.
The main point is that LLMs are changing rapidly, and even though AI programs seem exceptional at imitating human creativity, they still rely on humans for intent and inspiration...it's just really well-written code. For the time being, the best thing to do is get to know the technology and learn how it operates. That will be an essential skill for business professionals over the next ten years.
So get on the AI train, or get left behind.
Rapid Fire
Are you interested in sponsoring this newsletter? If so, send an email to [email protected] to find out more!