OpenAI’s recent allegations against DeepSeek are already all over the news. (It took a while to find a news link that wasn’t hidden behind a paywall.)
The recent Romanian election presents a fascinating case study of modern digital election interference, with many ramifications, most notably through social media manipulation.
What’s even more interesting is that the developments never seem to end.
While researching the cybersecurity aspect of the incident, I ended up touching on the political side as well.
The Manufactured Viral Campaign
The most striking aspect was the sophisticated manipulation of TikTok’s platform. Candidate Calin Georgescu’s sudden rise was engineered through a network of 25,000 coordinated accounts.
What makes this operation particularly noteworthy is its technical sophistication, as noted by the Romanian Intelligence Service (SRI): each account operated from a unique IP address, making traditional bot detection nearly impossible.
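To see why this matters, here is a minimal, purely illustrative sketch (the threshold, IP ranges, and event data are my own assumptions, not details from the SRI findings) of the kind of IP-frequency heuristic that a one-account-per-IP setup sidesteps:

```python
from collections import Counter

def flag_suspicious_ips(events, threshold=20):
    """Naive heuristic: flag IPs responsible for an unusually high
    number of account actions (a classic bot-farm signal)."""
    counts = Counter(ip for ip, _account in events)
    return {ip for ip, n in counts.items() if n >= threshold}

# Classic bot farm: thousands of accounts funneled through a handful of IPs.
farm_events = [(f"10.0.0.{i % 5}", f"acct_{i}") for i in range(25_000)]
print(len(flag_suspicious_ips(farm_events)))    # 5 -> trivially flagged

# The reported setup: every account behind its own IP address.
spread_events = [(f"10.0.{i // 256 % 256}.{i % 256}", f"acct_{i}")
                 for i in range(25_000)]
print(len(flag_suspicious_ips(spread_events)))  # 0 -> nothing stands out
```

With activity spread one-to-one across addresses, no single IP crosses any frequency threshold, so detection has to fall back on behavioral and content-level signals instead.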
Following the Money Trail
Despite Georgescu’s claims of running a zero-budget campaign, financial investigations revealed a different story. Through the FameUP platform, approximately $381,000 was spent between October and November 2024. This funding went toward a coordinated influencer campaign, with individual influencers receiving up to €1,000 per pre-made video share.
Technical Infrastructure Attacks
Beyond social media manipulation, the election faced direct infrastructure challenges. The SRI documented over 85,000 cyber attacks, along with breaches of several institutions’ websites.
Broader Implications
This case demonstrates how modern election interference combines social media manipulation with traditional cyber attacks. The operation’s sophistication – from unique IP addresses to multi-layered influencer campaigns – suggests state-level resources and planning.
The investigation is still ongoing and hopefully we will learn more.
The EU’s Digital Services Act (DSA) faces its first major test with this incident.
The passage of time is constant, at least in the classical Newtonian world.
Two days ago, on November 18, Let’s Encrypt celebrated its 10-year anniversary.
Continuing from last month’s progress report: ever since lowering my writing pace and reconsidering my approach to the cybersecurity blog, I’ve felt a sort of calm (for now, at least).
This month, I’ve been diving deep into how AI is reshaping software development lifecycles. After analyzing Stack Overflow’s latest survey and correlating it with real-world implementations and a few research papers, something interesting emerged: we’re asking the wrong questions again, just as we did with privacy-focused applications.
I’ve focused this month on researching the fall of Bohemia/Cannabia, the notorious dark web markets. This led me to several Finnish investigations, and of course it was a good time to read again about the first dark web market that was taken down more than a decade ago: the Silk Road.
Less than a month has flown by since the previous update, and I’ve felt more progress than in the entire year since I got back into content creation. A large part of this was due to the snowball effect of writing down drafts and blog ideas.
38% of AI-Using Employees Admit to Sending Sensitive Work Data, Says National Cybersecurity Alliance
I recently wrote about a survey by the National Cybersecurity Alliance (NCA) and CybSafe that revealed this finding.
Another good read on the topic is “AI hype as a cyber security risk: the moral responsibility of implementing generative AI in business”.
Going back to the survey, if we account for social desirability bias, the real number may be well above 50%. Social desirability bias occurs when respondents alter their answers to be viewed more favorably by others, often due to fear of judgment or legal repercussions.
This occurs, for example, in research conducted in countries where drugs are banned: participants may underreport or avoid disclosing drug use to align with societal norms or legal constraints, which skews the results.
A similar effect can be expected when employees are asked whether they input sensitive data into GPT models at work.
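As a rough back-of-the-envelope illustration (the underreporting rates below are assumptions made for the sake of the example, not figures from the NCA/CybSafe survey), even a modest correction factor pushes the reported 38% past the halfway mark:

```python
# If a share of true "yes" respondents deny it, then
# observed = true_rate * (1 - underreporting),
# so true_rate = observed / (1 - underreporting).
# The underreporting rates are illustrative assumptions, not survey data.
observed = 0.38

for underreporting in (0.10, 0.25, 0.35):
    true_rate = observed / (1 - underreporting)
    print(f"underreporting {underreporting:.0%} -> estimated true rate ~{true_rate:.0%}")

# underreporting 10% -> estimated true rate ~42%
# underreporting 25% -> estimated true rate ~51%
# underreporting 35% -> estimated true rate ~58%
```

Under these assumptions, if only a quarter of the employees who actually paste sensitive data denied it, the true figure would already exceed 50%.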
Another notable result, beyond the data privacy issue, concerns usage: younger generations have a higher adoption rate than older ones. Unfortunately, it is not just usage; trust in these systems also seems to be quite high.
The disclaimer “ChatGPT can make mistakes. Check important info.” appears to be similar to the warning messages on cigarette packs: few people read or pay attention to them. We are still in the early adoption phase, with many experts in their fields using LLMs to augment their work or become more proficient overall. But what awaits us next, several generations ahead, when LLMs could become a single point of truth?
Photo by DC Studio on Freepik.
Months have flown by since my last update in April. It’s now September 20th.
This year, January went fast, my honeymoon concluded, and February arrived with surprising swiftness as well. While it didn’t bring much time to settle in, it did, however, inspire a renewed passion for writing.