
There has been a surge in brazen scams involving Facebook pages impersonating businesses. While this is not a new phenomenon, several verified Facebook pages were recently hacked and used to spread what appears to be malware through ads approved and purchased through the platform. These accounts stand out because they impersonate Facebook itself, which should make them easy to catch. Social media consultant Matt Navarra first spotted and shared some of the suspicious ads on Twitter. The hacked accounts bore official-sounding names such as “Meta Ads” and “Meta Ads Manager” and could share suspicious links with tens of thousands of followers, with their reach likely extending even further through paid posts.
How did this ad get approved @Meta ?
😬
Verified account impersonating Meta tricking users into downloading shady tools pic.twitter.com/maPW6RWL3F
— Matt Navarra (@MattNavarra) May 4, 2023
In another instance, a hacked verified account purporting to be “Google AI” pointed users toward suspicious links for Bard, Google’s AI chatbot. The account previously belonged to Indian singer and actress Miss Pooja before its name was changed on April 29; it had been active for at least a decade and boasted more than 7 million followers.
And this is not an isolated case
Here's another verified Facebook Page impersonating Meta
Yet Meta has approved it to run this scam ad pic.twitter.com/oylBvS3XPD
— Matt Navarra (@MattNavarra) May 5, 2023
Facebook has introduced a new feature that publicly displays a history of name changes for verified accounts, a step toward transparency. However, this safeguard has yet to prevent some obvious scams from slipping through. In these recent cases, hacked pages were not only impersonating major tech companies, including Meta itself, but were also able to purchase Facebook ads and distribute suspicious download links. Even though the accounts had changed their names only days earlier, the ads were still approved by Meta’s automated ads system. Fortunately, all of the impersonator pages identified by Navarra have since been disabled.
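The failure mode here is striking because the signal seems so cheap to check: a verified page renames itself to resemble a protected brand and then immediately buys ads with download links. Purely as illustration, and not Meta’s actual review logic, a heuristic of the kind that could catch this might look like the following sketch, where all names and thresholds are hypothetical:

```python
# A hypothetical escalation heuristic (NOT Meta's actual review system):
# flag ad purchases from pages that recently renamed themselves to
# resemble a protected brand. Brand list and window are illustrative.
from datetime import datetime, timedelta

PROTECTED_BRANDS = {"meta", "facebook", "google", "google ai"}
RENAME_WINDOW = timedelta(days=30)

def should_escalate(page_name: str, last_renamed: datetime,
                    now: datetime | None = None) -> bool:
    """Escalate an ad to human review if the buying page was recently
    renamed AND its new name resembles a protected brand."""
    now = now or datetime.utcnow()
    recently_renamed = (now - last_renamed) < RENAME_WINDOW
    impersonating = any(brand in page_name.lower()
                        for brand in PROTECTED_BRANDS)
    return recently_renamed and impersonating

# A page renamed to an official-sounding name days before buying an ad
# would trip this check (dates here are made up for the example):
assert should_escalate("Meta Ads Manager", datetime(2023, 4, 29),
                       now=datetime(2023, 5, 4))
```

The public name-change history Facebook now displays would make exactly this kind of check possible, which is why its failure to block these ads stands out.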
Meta has also released a report on a recent wave of AI-themed malware scams. In these scams, hackers trick Facebook, Instagram, and WhatsApp users into downloading malware by posing as popular AI chatbot tools such as ChatGPT. One malware group, DuckTail, has been plaguing businesses on Facebook for years.
According to TechCrunch’s Carly Page, Meta reports that attackers distributing the DuckTail malware have increasingly used AI-themed lures to target businesses with access to Facebook ad accounts. DuckTail has targeted Facebook users since 2021: it steals browser cookies, hijacks logged-in Facebook sessions, and exfiltrates information such as account details, location data, and two-factor authentication codes. The malware also lets the attacker take control of any Facebook Business account the victim can access. The pages impersonating Facebook and buying malware-laden ads were compromised through DuckTail or similar malware.
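To make the cookie-theft mechanic concrete: a session cookie is a bearer token, so a server treats whoever presents it as the logged-in user, which is how DuckTail sidesteps passwords and two-factor codes entirely. The sketch below shows a generic server-side mitigation, binding a session to a coarse client fingerprint; this is a common defensive pattern, not Meta’s implementation, and all names are illustrative:

```python
# A minimal sketch of a common (partial) defense against session-cookie
# replay: bind each session to a fingerprint of the client captured at
# login, and reject the cookie when the client no longer matches.
# Generic pattern only; not Meta's implementation.
import hashlib

def client_fingerprint(ip: str, user_agent: str) -> str:
    # Hash coarse client attributes. Rotating IPs and spoofed headers
    # make this a heuristic, not a guarantee.
    return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()

def validate_session(session: dict, ip: str, user_agent: str) -> bool:
    """Reject a replayed cookie if the presenting client doesn't match
    the fingerprint recorded at login."""
    return session["fingerprint"] == client_fingerprint(ip, user_agent)

# At login, the server stores the fingerprint alongside the session:
session = {"user": "victim",
           "fingerprint": client_fingerprint("203.0.113.7", "Firefox/112")}

# An attacker replaying the stolen cookie from a different machine fails:
assert not validate_session(session, "198.51.100.9", "Chrome/112")
```

Checks like this force a re-authentication step when a stolen cookie is replayed from new infrastructure, which is one reason attackers also harvest two-factor codes.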
Meta Is Hacked
A Meta spokesperson explained that the company invests significant resources into detecting and preventing scams and hacks, but that scammers constantly try to circumvent its security measures. Impersonator accounts and compromised business pages have been a long-standing issue for business owners on Facebook and Instagram. To address this, Meta has launched Meta Verified, a verification program offering enhanced customer support for businesses. However, this “proactive account protection” isn’t free: companies must pay $14.99 a month for it. While some businesses may reluctantly pay that price to avoid being overwhelmed by scam accounts, the program has also sparked controversy.
Meta Platforms has hired a team of highly specialized engineers in Oslo who worked at British chip unicorn Graphcore until around December 2022 or January 2023. The team has been building artificial intelligence networking technology and will join Meta’s infrastructure team to support AI and machine learning at scale in the company’s data centres.
This move comes as Meta strives to improve how its data centres handle AI workloads, which have become increasingly crucial for targeting advertising, selecting posts for its apps’ feeds, and purging banned content from its platforms. To keep pace with rivals such as Microsoft and Alphabet’s Google, the company is rushing to introduce generative AI products that can produce human-like writing, art, and other content.
The team from Graphcore specialized in AI-specific networking technology, which is crucial for contemporary AI systems: models are now too large to fit on a single computing chip and must be distributed across many interconnected chips. Meta is designing several kinds of chips to speed up and maximize the efficiency of its AI work, including a network chip that acts as air traffic control for servers and a complex computing chip for both training AI models and performing inference, which it expects to be ready by 2025.
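To see why the interconnect matters as much as the chips themselves, consider a model too large for one device: each layer’s math must be sharded, and partial results have to be combined over the network at every step. Here is a minimal JAX sketch (illustrative only, not Meta’s stack) of a matrix multiply split across the local devices:

```python
# A minimal sketch of why chip-to-chip networking gates AI throughput:
# a matmul sharded over its contraction axis needs an all-reduce across
# devices on every step, and that all-reduce is pure network traffic.
import functools
import jax
import jax.numpy as jnp

n_dev = jax.local_device_count()

@functools.partial(jax.pmap, axis_name="dev")
def sharded_matmul(x_shard, w_shard):
    # Each device computes its slice of the product locally...
    partial = x_shard @ w_shard
    # ...then the partial products are summed across devices. This
    # psum is an all-reduce carried entirely over the interconnect.
    return jax.lax.psum(partial, axis_name="dev")

# Shard an (8, 128) @ (128, 64) matmul over the contraction dimension:
# each device holds an (8, 128 // n_dev) and a (128 // n_dev, 64) slice.
# (Assumes n_dev divides 128; with one device this degrades gracefully.)
x = jnp.ones((n_dev, 8, 128 // n_dev))
w = jnp.ones((n_dev, 128 // n_dev, 64))
y = sharded_matmul(x, w)  # shape (n_dev, 8, 64); every shard identical
```

The `psum` all-reduce runs on every training and inference step, which is exactly the kind of traffic a dedicated networking chip, like the one the Graphcore team’s expertise covers, is meant to accelerate.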
Graphcore, once considered a potential challenger to Nvidia’s dominant position in the market for AI chip systems, suffered a setback in 2020 when Microsoft cancelled an early deal to purchase Graphcore’s chips for its Azure cloud computing platform. Sequoia, a major investor in Graphcore, has written down its investment to zero. However, Graphcore remains optimistic about its prospects, believing it is well placed to capitalize on the increasing commercial adoption of AI.
Biden Holds Discussions with Microsoft and Google CEOs on the Potential Dangers of Artificial Intelligence
US President Joe Biden met with the CEOs of top artificial intelligence (AI) companies, including Google and Microsoft, on Thursday and emphasized the need to ensure products are safe before they are deployed. The rise of generative AI technology, such as ChatGPT, has raised concerns about privacy violations, misinformation campaigns, and skewed employment decisions. The meeting focused on the importance of transparency with policymakers, evaluating the safety of AI products, and protecting them from malicious attacks.

The Vice President and administration officials joined the two-hour discussion, announcing a $140 million investment in seven new AI research institutes and forthcoming policy guidance from the Office of Management and Budget. The CEOs were urged to take legal responsibility for the safety of their products and to collaborate on advancing new regulations and supporting new legislation on AI. The US government has fallen behind European governments in tech regulation, but the administration is working with the US-EU Trade & Technology Council to address these issues. In February, Biden signed an executive order directing federal agencies to root out bias in their use of AI; the Federal Trade Commission and the Department of Justice’s Civil Rights Division have also pledged to fight AI-related harm.