Edit 2025-04-09 16:42Z - article was updated with a tenth package (Prettier - Code)
A set of ten VSCode extensions on Microsoft’s Visual Studio Code Marketplace pose as legitimate development tools while infecting users with the XMRig cryptominer for Monero.
ExtensionTotal researcher Yuval Ronen has uncovered ten VSCode extensions published on Microsoft’s portal on April 4, 2025.
The package names are:
- Prettier - Code for VSCode (by prettier) - 486K installs
- Discord Rich Presence for VS Code (by Mark H) - 189K installs
- Rojo – Roblox Studio Sync (by evaera) - 117K installs
- Solidity Compiler (by VSCode Developer) - 1.3K installs
- Claude AI (by Mark H)
- Golang Compiler (by Mark H)
- ChatGPT Agent for VSCode (by Mark H)
- HTML Obfuscator (by Mark H)
- Python Obfuscator for VSCode (by Mark H)
- Rust Compiler for VSCode (by Mark H)
Let this be a lesson. We should never, ever, use software.
Software is the leading cause of all computer viruses.
All people who use software will fucking die, smh.
Such a sack of shit, that Mark H.
I’m glad his hand got chopped off
Oh Hi Mark
HTML Obfuscator
Fucking what
Maybe to build one of those shitty websites where you can’t select text because every letter is in its own element.
Just that? An open source HTML minifier probably bundled with a miner.
Minification isn’t the same as obfuscation, though. The only way I can think to obfuscate HTML would be to replace every element with a custom element.
Minification is a form of obfuscation. It makes it (much) less readable.
Of course you could run a formatter over it. But that’s already an additional step you have to do. By the same reasoning you could run a deobfuscator over more obfuscated code.
That is true!
But one could make up all kinds of tactics, especially with the help of CSS styles inside the document. For example: add random crap and make it invisible, or make the real content hard to see or find in the document. Why though? I don’t know! Now I am kind of curious to know what it did, if anything.
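For fun, here’s roughly what the tactics speculated about above could look like in practice - a purely hypothetical sketch (per-character spans plus invisible decoy text), not anything taken from the actual extension:

```typescript
// Purely hypothetical sketch of the tactics discussed above
// (per-character spans plus invisible decoy text) - not taken
// from the actual extension.
function obfuscateHtmlText(text: string): string {
  const decoys = ["lorem", "ipsum", "dolor", "sit", "amet"];
  let out = "";
  for (const ch of text) {
    out += `<span>${ch}</span>`; // every real character gets its own element
    if (Math.random() < 0.2) {
      // sprinkle in junk that inline CSS hides from the reader but not from "view source"
      const junk = decoys[Math.floor(Math.random() * decoys.length)];
      out += `<span style="display:none">${junk}</span>`;
    }
  }
  return out;
}

// "Hello" becomes something like:
// <span>H</span><span>e</span><span style="display:none">ipsum</span><span>l</span>…
console.log(obfuscateHtmlText("Hello"));
```

A formatter or a scraper can still undo most of this, which is sort of the point made above - it only raises the annoyance bar.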
Yo, @drspod@lemmy.ml, check the article again. Prettier, a very popular extension, heads the list now:
- Prettier — Code for VSCode (by prettier) – 955K installs
- Discord Rich Presence for VS Code (by Mark H) – 189K installs
- Rojo — Roblox Studio Sync (by evaera) – 117K installs
- Solidity Compiler (by VSCode Developer) – 1.3K installs
- Claude AI (by Mark H)
- Golang Compiler (by Mark H)
- ChatGPT Agent for VSCode (by Mark H)
- HTML Obfuscator (by Mark H)
- Python Obfuscator for VSCode (by Mark H)
- Rust Compiler for VSCode (by Mark H)
Thanks, I’ve updated the description text.
Such tricks were predictable, as VSCode extensions, letting arbitrary JS run on your system, are an obvious security risk.
Recently I’ve been using the Zed editor instead; it’s smooth, but it also has extensions, only these are fewer and written in Rust (maybe a higher barrier, targeting fewer users, so far…). What’s the solution here - is there some intrinsically safer sandboxed system?

The collaborative sharing nature of these platforms is a big advantage. (Not just the VS Code Marketplace - we have this with all extension, lib, and program package managers.)
Current approaches revolve around
- reporting
- manual review
- automated review (checks) for flagging or removal
- secured namespaces
The problem with the latter is that it is not necessarily proof of trustworthiness, only that the namespace is owned by the same entity in its entirety.
In my opinion, improvements could be made through
- better indication of publisher identity (verified legal entities like companies, established personas, or owned domains)
- better indication of publisher trustworthiness (how did they establish themselves as trustworthy: long-running contributions in the specific space or in general, a long-standing online persona, vs a “random person”, etc.)
- more prominent license and source code linking - it should be easy to access the source code to review it
- some platforms implement their own build infrastructure to ensure the source code represents the published package
Maybe there could be some more coordinated efforts of review and approval. Like, if the publisher has a trustworthiness indication, and the package has named advocates with their own trustworthiness indicated, you could make a better immediate assessment.
On the more technical side, before the platform level: a more restrictive and specific permission system. The way browser extensions ask for permissions on install and/or for specific functionality could be implemented for app extensions and lib packages too. Platform requirements could mandate minimal defaults, with optional capabilities being opt-in rather than “ask for everything by default”.
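To make that concrete, here’s a rough sketch of what a declared-permissions manifest could look like - purely hypothetical, since the real VS Code extension manifest (package.json) has nothing like this today:

```typescript
// Hypothetical only: the real VS Code extension manifest has no
// "permissions" concept today. This sketches what a
// browser-extension-style declaration could look like.
interface ExtensionPermissions {
  filesystem: "none" | "workspace" | "full";
  network: string[];      // allowed hosts; empty means no network at all
  processSpawn: boolean;  // may the extension launch external binaries?
  ui: ("themes" | "statusBar" | "decorations")[];
}

// A theme-only extension would need almost nothing...
const themeExtension: ExtensionPermissions = {
  filesystem: "none",
  network: [],
  processSpawn: false,
  ui: ["themes"],
};

// ...while a language-server wrapper has to ask for much more,
// which is exactly the signal an installer UI or reviewer could act on.
const langServerExtension: ExtensionPermissions = {
  filesystem: "workspace",
  network: ["registry.example.org"], // hypothetical update host
  processSpawn: true,
  ui: ["statusBar", "decorations"],
};

console.log(themeExtension, langServerExtension);
```

The point being: a theme shouldn’t need anything an installer UI would have to warn about, while a compiler or language-server wrapper would, and that difference would be visible before you install.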
In principle I’d like to see specific permissions - so, for example, playing with GUI enhancements should carry a lower trust barrier than adjusting and running code - but afaik (correct me if wrong) neither JS nor Rust has a built-in security architecture that could implement this. Maybe certain types of extensions could just use a custom script language without filesystem access, but that’s harder to do.
About source code linking: last time I heard (maybe they fixed it?), malicious VSCode extensions could link to arbitrary (safe-looking) source repos which didn’t actually produce the published extension.
I’m less convinced about slowly accumulating publisher trust, as this could be a barrier to honest new contributors, while big actors with a long-term profit or geopolitical motive could game such a system anyway (as they do for social media).
I do trust the Scala tools that adjust my code (the Mill build tool, the Metals language server, the compiler), having seen them evolve over many years, and I like the separation of functions (language server / editor), so we are less dependent on any one big-tech solution. So I suppose a fundamental issue is what to trust less: big corps with a reputation but lock-in power, or an ecosystem of small contributors which might include tricksters. No perfect balance.
The more sandboxed the extension system, the less powerful it is.
Either you have an entity that approves extensions, or your users have to be very careful and trusting of other people. There’s no other way.
I can’t imagine a sandbox would help. What can an extension do that doesn’t touch some arbitrary code that gets run? It could add a line to the middle of a giant file right before you run it and remove it immediately after. Even if you run the whole editor in a sandbox, you do eventually deploy that code somewhere; it can change something inconspicuous like a URL in a dependency file that might not get caught in a PR.
The only solution is to audit everything you install, know all the code you run, etc. Of course that’s not reasonable, but idk what else there is. Better automated virus checks, maybe? Identity verification for extension publishers? idk if there’s an actual solution.
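For the “inconspicuous change to a dependency file” case specifically, the only cheap idea I can think of is pinning hashes of those files on a machine you trust and re-checking them in CI or a pre-deploy hook. A rough sketch (hypothetical, not an existing tool, and it obviously does nothing against the add-a-line-then-remove-it trick):

```typescript
// Hypothetical mitigation sketch, not an existing tool: record hashes of
// files an extension should never touch (dependency manifests, lockfiles)
// on a machine you trust, then re-check them before deploy.
// File names below are assumptions for the example.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

const WATCHED_FILES = ["package.json", "package-lock.json"];

function fileDigest(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

// .deps-baseline.json maps file name -> expected sha256,
// generated earlier on a trusted machine and committed.
const baseline: Record<string, string> = JSON.parse(
  readFileSync(".deps-baseline.json", "utf8"),
);

const tampered = WATCHED_FILES.filter((f) => fileDigest(f) !== baseline[f]);
if (tampered.length > 0) {
  console.error("Dependency files changed outside normal review:", tampered);
  process.exit(1);
}
```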
It seems so far Zed is cautious, providing an API only for specific extension types, i.e. language servers and GUI themes.
add a line … right before you run it
I run stuff from the command line using a trusted build tool (Mill, in scala), or via a local server (where js is sandboxed).
But indeed, a tricky language server or AI tool (I don’t use the latter yet) might inject code where I don’t inspect it before running. That’s a risk even with Java-based IDEs - Java has security permissions, unlike JS (VSCode) or Rust (Zed), but are they actually applied…? As for audits, a problem with VSCode is that the marketplace got too big: so many extensions, so many lookalikes, nobody can check them all…

Well, the language-server plugins all run a binary language server outside the sandbox, so Zed doesn’t really do anything safer in particular there either. No IDE has solutions; solutions don’t really exist right now. It’s not a problem of language features as much as it is the features developers expect in extensions. I suppose there is a hypothetical “the extension wants to make this change to this file, approve?” type flow like AI tools have now, but that sounds unpleasant to use. It still doesn’t get around things like language servers being designed to run as standalone processes outside the sandbox.
By audits I meant you individually go and read all the code of all the extensions you use. Of course that’s impossible too, but that was my point.
Might be one thing AI tools will be super useful for, if it’s possible to teach them what types of code are potentially malicious and have them automatically flag it for review, at least.
Microsoft and macro viruses, name a more iconic duo.
Can your Linux do that?
.
.
.
.
.
.
.
.
Yes, of course it can, all of that and even more!
Kate >>> VSCode
Who is Kate?
Kate the editor? Or is there an IDE called Kate?
Shocker. An Electron app that’s terrible and full of malware.
Don’t think it has anything to do with Electron. VSCode is just the largest editor that people install extensions for, so it’s what makes the most sense to write malware for. If vim were more popular, I’m sure there would be more crypto-mining extensions for it (I wonder how many there are? Surely more than zero?)
It also helps that it’s as easy as clicking a button to install an extension… and I wonder how many people even bother checking the source of the extension?
It’s also kinda “trusted” in corporations. The one I work for has GitHub blocked for some reason, but any user can install VS Code and extensions by themselves.