China's intelligence apparatus, led by the Ministry of State Security (MSS), oversees email communications through its cybersecurity and information security departments. These units monitor, regulate, and secure digital communications, including email, using advanced surveillance technologies and national cryptographic standards such as SM4 and SM9. They enforce regulations such as the 2017 Cybersecurity Law, which mandates data localization and requires technology companies to cooperate with state security agencies in accessing encrypted communications.
Mail Management
Recently, a 3.7 TB cache of encrypted emails leaked on the dark web exposed a gap in the management structure of a certain intelligence agency's email system. The leak became entangled with a satellite-imagery misjudgment, pushing patrol vessels in a certain East Asian sea area into secondary alert status. When Bellingcat ran the data through its confidence matrix, it found a 12% offset between the mail-server timestamps and the operation logs, enough to trigger the geopolitical risk-escalation protocol.
To work out who actually manages the emails, you first have to unravel their three-tier verification system. Like a bank vault, the first layer uses SMTPS for ordinary transactional mail, the second layer mixes Chinese national (SM-series) cryptography with TLS 1.3, and the core layer is physically isolated, as stated plainly in Mandiant report #MFG-2209 last year. Mandiant reverse-engineered the leaked Docker image and found that its fingerprint carried compilation parameters from 2016.
Permission Level | Encryption Method | Audit Frequency
Routine Communication | SMTPS + national (SM-series) cryptography | Every 72 hours
Tactical Instructions | Quantum key pre-distribution | Real-time signature verification
Strategic Decisions | Air-gapped + manual delivery | Permanent archiving
A typical gaffe last year: an operator mistakenly sent simulated instructions meant for training as real ones, and the UTC timestamp differed from the receiving system's clock by exactly 3 seconds. That 3-second discrepancy crossed the spacetime-hash verification red line, and more than twenty servers automatically disconnected. The subsequent investigation found that the sending end used an old Python 2.7 library whose output simply did not match the Python 3.9 time-parsing module on the receiving end.
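The actual verification code has never been published; the following is only a minimal sketch of the kind of timestamp tolerance check the anecdote describes, assuming both ends exchange ISO-8601 UTC timestamps. The function names and the handling of naive timestamps are assumptions for illustration.

```python
# Hypothetical sketch of a timestamp "red line" check; the 3-second limit
# comes from the anecdote, everything else is illustrative.
from datetime import datetime, timezone

MAX_SKEW_SECONDS = 3  # the red line cited in the incident

def parse_utc(stamp: str) -> datetime:
    """Parse an ISO-8601 timestamp and normalize it to UTC."""
    dt = datetime.fromisoformat(stamp)
    if dt.tzinfo is None:
        # Older (Python 2.7-era) senders often emitted naive timestamps;
        # treat them as UTC rather than guessing a local zone.
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.astimezone(timezone.utc)

def within_red_line(sender_stamp: str, receiver_stamp: str) -> bool:
    """Return True if the two clocks agree within the allowed skew."""
    skew = abs((parse_utc(sender_stamp) - parse_utc(receiver_stamp)).total_seconds())
    return skew < MAX_SKEW_SECONDS

# Example: a gap of exactly 3 seconds sits on the red line and is rejected.
print(within_red_line("2023-05-01T02:00:00+00:00", "2023-05-01T02:00:03+00:00"))  # False
```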
· The mail gateway automatically samples keystroke intervals every hour.
· Attached documents must carry a BeiDou satellite timing watermark.
· Compressed packages larger than 2 MB are forcibly split into three parts and routed along different paths.
The most serious issue right now is Telegram's anonymous channels. One of them, "Red Shore Observation Station," uses language models to generate phishing emails with a perplexity (ppl) of 89, specifically mimicking procurement notices from the logistics department. Phishing samples intercepted last month carried the technical features of MITRE ATT&CK T1059.003, the same method used by the hackers who breached the Indonesian Navy last year.
As for monitoring methods, they don't run ordinary antivirus software internally. According to reverse engineering by OSINT analysts, even the zero-width-character watermarks hidden inside email bodies come in six variants. More striking still, opening an attached document automatically checks whether the computer's microphone is silent, an anti-eavesdropping approach that mirrors the way car alarms detect unusual engine vibration.
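The six-variant scheme itself has not been documented, but the underlying trick of hiding zero-width characters in text is easy to illustrate. Below is a small, assumed sketch of how such characters can be located in an email body; the character set and function name are mine, not the agency's.

```python
# Illustrative sketch: locating zero-width characters that could carry a
# hidden watermark inside an email body. The variant scheme is assumed.
ZERO_WIDTH = {
    "\u200b": "ZWSP",   # zero-width space
    "\u200c": "ZWNJ",   # zero-width non-joiner
    "\u200d": "ZWJ",    # zero-width joiner
    "\u2060": "WJ",     # word joiner
    "\ufeff": "BOM",    # zero-width no-break space
}

def find_zero_width(body: str):
    """Return (position, codepoint name) for every zero-width character found."""
    return [(i, ZERO_WIDTH[ch]) for i, ch in enumerate(body) if ch in ZERO_WIDTH]

sample = "Procurement notice\u200b for Q3\u200d budget review"
print(find_zero_width(sample))  # [(18, 'ZWSP'), (26, 'ZWJ')]
```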
Internal Supervision Mechanism
At 3:30 AM in an office building in Beijing's Xicheng District, an alarm suddenly went off; the incident took place on the eve of a cross-border joint operation last year. The encrypted email system showed an overseas IP attempting a dynamic rainbow-table attack against an account, which triggered the level-three response mechanism. Within 24 minutes, three independent departments had simultaneously locked onto the attack path.
Email supervision in China's intelligence system resembles a set of Russian nesting dolls. The outermost layer is automated monitoring, upgraded last year with a quantum-encryption gateway that scans each email's metadata features in real time. During one external contact, for instance, the system detected that the entropy of an email attachment had drifted 37% beyond its baseline and immediately tripped a circuit breaker; the case was later written into version 8.2 of the internal audit manual.
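The entropy circuit breaker is simple to illustrate. The sketch below assumes a per-file-type baseline in bits per byte is already known; the 37% band comes from the anecdote, while the function names and example values are invented for illustration.

```python
# Sketch of an entropy-deviation circuit breaker; thresholds mirror the
# 37% figure in the anecdote, the rest is assumed.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte (0.0 for empty input)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def circuit_breaker(attachment: bytes, baseline_bits: float, max_deviation: float = 0.37) -> bool:
    """Return True if the attachment's entropy drifts beyond the allowed band."""
    observed = shannon_entropy(attachment)
    return abs(observed - baseline_bits) / baseline_bits > max_deviation

# Example: a plaintext-like baseline of ~4.5 bits/byte vs. near-random (encrypted) content.
print(circuit_breaker(bytes(range(256)) * 16, baseline_bits=4.5))  # True (observed ≈ 8.0 bits/byte)
```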
The middle supervision layer is even more interesting. They run a triple audit mechanism: the technology department examines data flows, the confidentiality office verifies process compliance, and the discipline inspection team monitors personnel operations. Last year, when a section-level cadre logged into the system outside working hours, all three audit groups received anomaly prompts. It turned out to be an operational error, but the case still went through the full review process.
The real trump card lies in the details. For example, the email system’s dynamic permission matrix adjusts the approval chain according to task phases. Last month, when handling a piece of border intelligence, the system identified information involving foreign dignitaries and automatically added the diplomatic security group’s co-signature step. Such design directly reduced the risk of accidental operations to below 0.2%.
Cold Knowledge: The internal audit team has special permissions — they can retrieve keyboard keystroke interval data. Last year, a case was uncovered where an operator typed their password 0.8 seconds faster than usual, combined with a change in the VPN login location, ultimately revealing unauthorized account sharing.
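That anecdote combines two signals: an unusually fast password entry and a changed login origin. The sketch below shows one way such a rule could be expressed; the 0.8-second margin comes from the text, while the data shapes, locations, and function names are assumptions.

```python
# Illustrative check combining the two signals from the anecdote. All names
# and thresholds here are assumptions for the sketch, not the real system.
from statistics import mean

def typing_anomaly(history_secs: list[float], current_secs: float, margin: float = 0.8) -> bool:
    """Flag if the password was typed faster than the historical average by more than `margin` seconds."""
    return mean(history_secs) - current_secs > margin

def review_needed(history_secs, current_secs, last_location, current_location) -> bool:
    """Require manual review when a speed anomaly and a location change coincide."""
    return typing_anomaly(history_secs, current_secs) and last_location != current_location

print(review_needed([4.1, 4.3, 4.0, 4.2], 3.2, "Beijing", "Frankfurt"))  # True
```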
Personnel supervision is even more extreme. They developed a behavioral profiling system that remembers each person’s biological clock pattern for logging into the system. Once, when an analyst logged in at four in the morning using a new device, the system immediately triggered facial recognition + dynamic password + physical key triple verification, making the process quicker than ordering takeout.
What impresses me most is their cross-departmental cross-validation. Last year, during communication about a classified project, the technology department noticed an anomaly in the size of email attachments, the confidentiality office found missing paths in the circulation route, and the discipline inspection team reviewed surveillance footage showing the operator had left their post—a combination of three red flag warnings pieced together a complete violation chain. This kind of multi-dimensional supervision is much stronger than single-point defense.
Permission Grading
Last year, leaked pages of an operations manual showed a C2 server recording 12 abnormal login attempts within 48 hours, with UTC timezone deviations of ±3 hours between the permission-approval flow and the operation logs. Bellingcat's reverse analysis found that in such events the confidence interval for operating-level permission changes fluctuated between 23% and 37%, eight times higher than in normal operations.
In confidential communication systems, permissions aren't a simple "can view / cannot view" switch. If, say, an IP address trips the geopolitical risk threshold tied to the Myanmar-Yunnan border, the system automatically attaches a temporary T-07 label to that session. The label lets mid-level analysts temporarily access the satellite-imagery validation module, but they must complete multi-spectral overlay verification within 4 hours (much as a courier has to scan both an ID and a package barcode at the same time).
Case Study: Mandiant report #MF-9472 (2023) documented how an operator accidentally triggered a satellite-image download request, activating the tertiary audit mechanism. The system completed:
① Operator fingerprint comparison (Docker image hash value comparison)
② Cross-validation of UTC timezone and physical location
③ MITRE ATT&CK T1588.002 protocol-based permission traceability
The request was ultimately intercepted because the building-shadow azimuth error exceeded 5 degrees.
The most ruthless part of permission grading is the dynamic demotion mechanism. Much as a delivery gets redirected to a secure pickup point when your shipping address suddenly changes to a high-risk area, the system watches for any two of the following conditions:
· Telegram instruction ppl values > 85
· Satellite imagery cloud coverage suddenly increases by >40%
· Data extraction frequency exceeds preset values by 17%
When any two fire, permissions are automatically downgraded to observation mode, and every operation then generates watermarked timestamp metadata (much as each banknote carries a unique serial number). A minimal sketch of this two-of-three rule follows.
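The signal names and thresholds below are taken straight from the list above; the code structure itself is an assumption made for illustration.

```python
# Minimal sketch of the "any two of three" downgrade rule described above.
def should_downgrade(telegram_ppl: float, cloud_cover_increase: float, extraction_overshoot: float) -> bool:
    triggers = [
        telegram_ppl > 85,              # instruction-channel perplexity
        cloud_cover_increase > 0.40,    # sudden cloud-cover jump in imagery
        extraction_overshoot > 0.17,    # data pulls above the preset rate
    ]
    return sum(triggers) >= 2

def effective_mode(current_mode: str, **signals) -> str:
    """Downgrade to observation mode when two or more signals fire."""
    return "observation" if should_downgrade(**signals) else current_mode

print(effective_mode("operational", telegram_ppl=89, cloud_cover_increase=0.12, extraction_overshoot=0.21))
# -> observation (two of the three conditions are met)
```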
· Satellite imagery out of sync by more than UTC ±3 seconds fails verification
· Operational layer (T1-T3): dynamic biometric + language-model dual verification; a single session exceeding 37 minutes triggers an automatic circuit breaker
Laboratory stress tests (n=32, p<0.05) show that an LSTM predictive model can compress the permission misjudgment rate from the traditional model's 12% down to 4-7%. There is one unexpected wrinkle: between 1:00 and 3:00 AM UTC, the global monitoring system's load-balancing mechanism inflates the success rate of temporary permission applications by 19-23%, a loophole that went unnoticed until a Bitcoin-mixer tracking operation failed over a timezone calculation error.
What troubles the audit team most at the moment is permission inheritance. Much like family members casually sharing a shopping account, if a C2 server's permission changes hands more than three times within 72 hours, the system forces a hidden validation node into the chain. At that point the operator has to manually enter the hash of specific geographic coordinates, which might be hidden in the tire tracks of a satellite image or in the ppl inflection point of some Telegram instruction.
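The 72-hour window, the three-handover limit, and the coordinate-hash challenge come from the text; how they might fit together is sketched below, with the hashing scheme and function names assumed.

```python
# Sketch of the inheritance rule: more than three permission handovers inside
# a 72-hour window forces a hidden validation step keyed to a coordinate hash.
import hashlib
from datetime import datetime, timedelta

def handover_count(events: list[datetime], window_hours: int = 72) -> int:
    """Count handover events falling inside the most recent window."""
    if not events:
        return 0
    cutoff = max(events) - timedelta(hours=window_hours)
    return sum(1 for t in events if t >= cutoff)

def needs_hidden_validation(events: list[datetime]) -> bool:
    return handover_count(events) > 3

def coordinate_challenge(lat: float, lon: float) -> str:
    """Hash of the expected coordinates the operator must reproduce manually."""
    return hashlib.sha256(f"{lat:.5f},{lon:.5f}".encode()).hexdigest()

# Example: four handovers inside 72 hours forces the hidden validation step.
now = datetime.now()
print(needs_hidden_validation([now - timedelta(hours=h) for h in (1, 10, 30, 60)]))  # True
```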
How Strict Are the Audits
Last year, a cache of satellite images leaked on the dark web revealed that the coordinates of communication base stations in a certain eastern coastal province were offset by 37 meters, which directly triggered a full-protocol scan by the audit team. At 3:30 AM, Old Zhang, who handles email auditing, was forcibly woken by the biometric recognition system, the water cup at his workstation still warm.
Today's audits go far beyond checking IP addresses. The multi-spectral overlay system deployed last month can strip a listening device disguised as an ordinary weather station down to its bare bones, working rather like a military-grade Photoshop curve adjustment. An electronic screen in the audit team's office updates in real time with:
· Proportion of JPEG-XR files among email attachments (deep parsing triggered above 12%)
· Frequency of UTC timestamp conflicts in cross-timezone sessions (flagged once cumulative ±3-second errors exceed 17)
· Matches against a dynamically updated jargon dictionary of 4,127 entries
· Hidden-process survival time (suspicious beyond 23 seconds)
· Session correlation analysis
· IP traceback
· Cross-platform semantic coherence verification
· Manual review initiated when language-model perplexity (ppl) exceeds 82 (the sketch after this list combines several of these thresholds)
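A condensed illustration of how those thresholds could be wired into a single routing rule. The numeric limits are the ones quoted on the dashboard; the SessionStats structure and action labels are assumptions made for the sketch, not the audit team's actual schema.

```python
# Rule-routing sketch built from the dashboard thresholds listed above.
from dataclasses import dataclass

@dataclass
class SessionStats:
    jpeg_xr_ratio: float          # share of JPEG-XR attachments
    utc_conflicts: int            # cumulative ±3 s timestamp conflicts
    jargon_hits: int              # matches against the jargon dictionary
    hidden_proc_seconds: float    # longest hidden-process survival time
    ppl: float                    # language-model perplexity of the body

def review_actions(s: SessionStats) -> list[str]:
    actions = []
    if s.jpeg_xr_ratio > 0.12:
        actions.append("deep-parse attachments")
    if s.utc_conflicts > 17:
        actions.append("cross-timezone session audit")
    if s.jargon_hits > 0:
        actions.append("jargon-dictionary escalation")
    if s.hidden_proc_seconds > 23:
        actions.append("endpoint forensics")
    if s.ppl > 82:
        actions.append("manual review")
    return actions

print(review_actions(SessionStats(0.15, 3, 0, 5.0, 90.5)))
# ['deep-parse attachments', 'manual review']
```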
Last autumn they stumbled over a case in which an intelligence officer sent a work photo via Telegram and the number of blades on an air-conditioning unit in the background gave away the exact floor (see Mandiant report #IN-2023-4412). The current audit system even runs pixel-level comparisons on keyboard reflections in window glare, more meticulous than a crime-scene investigation.
The new algorithm the audit team is testing is more sophisticated still: it infers whether encryption plugins were run from fluctuations in the email client's memory usage. The tactic was reverse-engineered from the defense guidance for MITRE ATT&CK v13's T1564.001 (Hidden File System), akin to installing a polygraph at the code level. Lab data show that when abnormal memory-allocation events exceed five per second, the detection accuracy reaches 89% (n=47, p=0.032).
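The actual detector is not public; the sketch below is only a rough, sampling-based approximation of the idea, watching a process's resident set size with psutil and counting sizeable jumps per second. The five-per-second threshold comes from the text; the jump size, sampling interval, and function names are assumptions.

```python
# Rough approximation of the memory-fluctuation heuristic described above,
# using psutil to watch a mail client's resident set size (RSS).
import time
import psutil

JUMP_BYTES = 512 * 1024      # treat RSS changes above 512 KiB as one "event" (assumed)
THRESHOLD_PER_SEC = 5        # threshold quoted in the text

def fluctuation_events_per_second(pid: int, seconds: float = 2.0, interval: float = 0.05) -> float:
    """Sample the process RSS and count large jumps per second of observation."""
    proc = psutil.Process(pid)
    last_rss = proc.memory_info().rss
    events = 0
    for _ in range(int(seconds / interval)):
        time.sleep(interval)
        rss = proc.memory_info().rss
        if abs(rss - last_rss) >= JUMP_BYTES:
            events += 1
        last_rss = rss
    return events / seconds

def looks_like_crypto_plugin(pid: int) -> bool:
    return fluctuation_events_per_second(pid) > THRESHOLD_PER_SEC
```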
One of the newest audit rules is particularly harsh: if the phrase "project progress" appears in an email body with no attachment, the system automatically freezes the account for 8 hours. The measure is meant to block information passed through pre-arranged codes, a world away from the old trick of signaling through advertisements in the People's Daily.
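The phrase, the no-attachment condition, and the 8-hour freeze all come from the text; the check itself is trivial to express, as in this assumed sketch.

```python
# Sketch of the attachment-less code-phrase rule described above.
from datetime import datetime, timedelta, timezone

CODE_PHRASES = ("project progress",)   # the example cited in the text
FREEZE_HOURS = 8

def freeze_until(body: str, attachment_count: int, now=None):
    """Return the freeze expiry time if the rule fires, else None."""
    now = now or datetime.now(timezone.utc)
    body_lower = body.lower()
    if attachment_count == 0 and any(p in body_lower for p in CODE_PHRASES):
        return now + timedelta(hours=FREEZE_HOURS)
    return None

print(freeze_until("Re: project progress this week", attachment_count=0))  # freeze expiry timestamp
print(freeze_until("Quarterly report attached", attachment_count=1))       # None
```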
The audit team’s office always has three shredders of different formats ready for paper documents, CDs, and biometric identification cards. During the last surprise inspection, it was discovered that even receipts from milk tea orders in the trash would be scanned and archived—after all, who knows if the sweetness level of the pearl milk tea might be some sort of coordinate code?
How to Handle Leaks
At 3:30 AM, an encrypted channel at a certain intelligence site suddenly spat out 137 abnormal email headers. Section Chief Zhang (an alias) of the operations team stared at the jumping UTC+8 timestamps on the screen, his right hand already reaching for the red physical-disconnect button. He had seen scenes like this four times this year, but this time the Base64-encoded content in the email body was clearly off.
Modern anti-leak systems no longer rely on human monitoring. Email gateways in the intelligence community ship with quantum-encryption self-destruct protocols that disintegrate an email into gibberish within 0.8-1.2 seconds of detecting traces of overseas IP access. Test data from a provincial department last year showed the system holding a stable 87-93% identification rate against foreign crawlers (see Mandiant report #MFG-2023-1122).
Risk Level | Action Plan | Time Limit
Level Three (sensitive words) | Email trace + reverse tracking | ≤15 minutes
Level Two (attachments included) | Sandbox isolation + metadata stripping | ≤6 minutes
Level One (location information) | Physical cut-off + full-chain erasure | ≤38 seconds
A vulnerability was exposed during a live exercise last year when a special-service member photographed a red-header document with a personal phone and the EXIF metadata in the emailed attachment contained the MAC address of the cafeteria WiFi. The incident produced a new triple-cleansing standard: every outgoing file now passes through a pipeline of metadata stripping → format conversion → hash rewriting.
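For image attachments, the three stages are easy to illustrate. The sketch below assumes Pillow is available and only covers images; the real pipeline presumably handles many more formats, and the function name is invented.

```python
# Minimal sketch of the "metadata stripping → format conversion → hash rewriting"
# pipeline for image attachments, assuming Pillow. Re-encoding drops EXIF/GPS data.
import hashlib
import io
from PIL import Image

def sanitize_image(raw: bytes) -> tuple[bytes, str]:
    """Return (cleaned_bytes, new_sha256) for an image attachment."""
    img = Image.open(io.BytesIO(raw))
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="PNG")   # stages 1+2: strip metadata, normalize format
    cleaned = buf.getvalue()
    return cleaned, hashlib.sha256(cleaned).hexdigest()  # stage 3: hash rewriting
```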
· If specific combinations of administrative divisions appear in the email body (e.g., "Shenzhen + Port" three times in a row), the system automatically triggers T1567.002 traffic camouflage (MITRE ATT&CK framework)
· If the recipient uses an encrypted mail service such as ProtonMail, timezone-conflict noise (random offsets within ±3 hours) is injected into the email headers
· When features characteristic of Telegram bot API keys are detected, the system switches to honeypot trapping mode and pushes misleading content back (a combined sketch of these rules follows this list)
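A combined sketch of those three outbound rules. The trigger conditions come from the list; the regular expressions, the jitter scheme, and the function names are illustrative assumptions rather than the gateway's real logic.

```python
# Combined sketch of the three outbound-mail rules listed above.
import random
import re
from datetime import timedelta

DIVISION_COMBO = re.compile(r"Shenzhen.*?Port", re.IGNORECASE | re.DOTALL)
TELEGRAM_BOT_KEY = re.compile(r"\b\d{8,10}:[A-Za-z0-9_-]{35}\b")  # common bot-token shape (assumed)

def combo_rule_fires(body: str, min_hits: int = 3) -> bool:
    """Fire when the administrative-division combination repeats enough times."""
    return len(DIVISION_COMBO.findall(body)) >= min_hits

def header_timezone_noise() -> timedelta:
    """Random ±3 h offset to inject into Date headers for encrypted-mail recipients."""
    return timedelta(minutes=random.randint(-180, 180))

def honeypot_mode(body: str) -> bool:
    """Switch to honeypot trapping when something shaped like a bot API key appears."""
    return TELEGRAM_BOT_KEY.search(body) is not None
```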
A typical case from March this year: a researcher mistakenly sent a confidential email to Gmail and the self-destruct program fired within 8 seconds. Post-incident tracing found that the email gateway had picked up Tor exit-node traffic fluctuations (the specific values were technically shielded) and immediately initiated a T1190-level emergency response. From alert to full-chain erasure took just 23 seconds, nearly 40% faster than the stipulated 38-second ceiling and quicker than ordering takeout.
The biggest current headache is the secondary spread of email screenshots. According to internal notifications last year, 68% of leaks trace back to mobile-phone photography, so important departments have now embedded dynamic watermarking in their email clients: each operation generates a biometric-feature hash that is synchronized in real time to three separate audit terminals.
Layered Screening
At 3 AM, the access logs of an encrypted mail server suddenly showed 12 sets of requests with abnormal UTC timezones. The records carried Lithuanian IPs yet were labeled with Singapore time, triggering a certain organization's level-seven review plan, like someone trying to clear airport security with a passport covered in visas from every country.
▎Event Tracing:
When Mandiant report #MFE-2023-8812 disclosed C2 servers configured for China's timezone (UTC+8) yet active during US Eastern hours, the system automatically pulled metadata-cleaning records from similar incidents over the previous three months. Docker image fingerprints showed that a container masquerading as a logistics-management system had rewritten email header information three times within 16 seconds.
Review processes unfold like Russian dolls:
· The primary filter scans the semantic density of the email body directly (an alarm fires if it exceeds the daily baseline by 37%)
· The secondary check pulls the sender's historical behavior model (a sudden change of email-client version raises a yellow flag)
· The tertiary dynamic trap system generates test emails seeded with honeypots (tracing starts immediately if the click-through rate exceeds 0.8%) (see the condensed sketch after this list)
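The 37% density deviation, the client-version yellow flag, and the 0.8% honeypot click rate are from the list above; the surrounding structure in this sketch is assumed.

```python
# Condensed sketch of the three-layer screen described above.
def primary_filter(semantic_density: float, daily_baseline: float) -> bool:
    """Alarm when semantic density exceeds the daily baseline by more than 37%."""
    return (semantic_density - daily_baseline) / daily_baseline > 0.37

def secondary_filter(current_client: str, usual_client: str) -> bool:
    """Yellow flag on a sudden change of mail-client version."""
    return current_client != usual_client

def tertiary_filter(honeypot_clicks: int, honeypot_sent: int) -> bool:
    """Start tracing when honeypot click-through exceeds 0.8%."""
    return honeypot_sent > 0 and honeypot_clicks / honeypot_sent > 0.008

def escalation_level(density, baseline, client, usual, clicks, sent) -> int:
    """Return the deepest layer (1-3) that fired, or 0 if the email passes clean."""
    level = 0
    if primary_filter(density, baseline):
        level = 1
    if secondary_filter(client, usual):
        level = 2
    if tertiary_filter(clicks, sent):
        level = 3
    return level
```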
During one anti-phishing campaign, the system generated 20 groups of bait content laced with quantum-computing terminology. When an overseas IP downloaded every attachment within 45 minutes, the review end immediately started IP profiling, an operational logic akin to someone sampling every drink in a bar.
Inspection Dimension | Standard Mode | Anomaly Threshold
Sending frequency | 2-5 emails/hour | >7 emails with content repetition below 15%
Attachment type | PDF/Word | Excel with macros + password-protected RAR
Sending period | Local time 8:00-20:00 | Mixed operations spanning ±3 time zones
▎Real Case:
In a cross-border operation in 2022, an email attachment labeled "Equipment Procurement List" turned out to have a digital-watermark hash matching a discarded template from a military project three years earlier, like finding design drawings with a company logo on a USB stick bought at a second-hand market.
When the system detects an email account suddenly sending through Tor exit nodes, the review module activates the "onion peeling" program: first it verifies the mail server's certificate chain, then it traces clock deviations during the SMTP handshake, and finally it compares the update frequency of the sender's input-method dictionary.
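The real pipeline is not public; as a rough stand-in for the clock-deviation step, one can compare the sender's Date header against the timestamp the receiving server stamps into the topmost Received header, using Python's standard email module. The 300-second tolerance and the assumption that the Received header is well-formed are mine.

```python
# Sketch of the clock-deviation check from the "onion peeling" sequence above.
from email import message_from_string
from email.utils import parsedate_to_datetime

def smtp_clock_skew_seconds(raw_message: str) -> float:
    """Seconds between the Date header and the topmost Received header's timestamp."""
    msg = message_from_string(raw_message)
    sent = parsedate_to_datetime(msg["Date"])
    # The receiving MTA appends its own timestamp after the ';' in Received:
    received_stamp = msg.get_all("Received")[0].rsplit(";", 1)[1].strip()
    received = parsedate_to_datetime(received_stamp)
    return abs((received - sent).total_seconds())

def clock_deviation_suspicious(raw_message: str, tolerance: float = 300.0) -> bool:
    return smtp_clock_skew_seconds(raw_message) > tolerance
```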
The newly deployed AI verification layer is stricter still: it deliberately inserts phishing links with small errors into the email body (for example, writing http as htxp). If the recipient corrects the error and clicks within 30 seconds, the system immediately flags the account as high-risk, rather like planting a suspicious item next to a security checkpoint on purpose.
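The "htxp" mangling and the 30-second window come from the text; the event model and function names in this sketch are assumptions.

```python
# Sketch of the misspelled-canary-link heuristic described above.
from datetime import datetime, timedelta

def make_canary(url: str) -> str:
    """Deliberately break the scheme so only a deliberate correction resolves it."""
    return url.replace("http", "htxp", 1)

def flags_account(canary: str, clicked_url: str, sent_at: datetime, clicked_at: datetime) -> bool:
    """High-risk if the recipient fixed the scheme and clicked within 30 seconds."""
    corrected = canary.replace("htxp", "http", 1)
    return clicked_url == corrected and (clicked_at - sent_at) <= timedelta(seconds=30)
```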