Commit 9bf10ff

refactor: resize protect ai 6 months headers (#2816)
Resizing the headers as they are a bit big in the current blog post
1 parent c69ee3f commit 9bf10ff

File tree: 1 file changed (+7, -7 lines)


Diff for: pai-6-month.md

@@ -13,7 +13,7 @@ authors:
 
 Hugging Face and Protect AI partnered in [October 2024](https://protectai.com/blog/protect-ai-hugging-face-ml-supply-chain) to enhance machine learning (ML) model security through [Guardian’s](https://protectai.com/guardian) scanning technology for the community of developers who explore and use models from the Hugging Face Hub. The partnership has been a natural fit from the start—Hugging Face is on a mission to democratize the use of open source AI, with a commitment to safety and security; and Protect AI is building the guardrails to make open source models safe for all.
 
-## 4 new threat detection modules launched
+### 4 new threat detection modules launched
 
 Since October, Protect AI has significantly expanded Guardian's detection capabilities, improving existing threat detection and launching four new detection modules:
 
@@ -28,17 +28,17 @@ With these updates, Guardian covers more model file formats and detects addition
 |:--:|
 |***Figure 1:** Protect AI’s inline alerts on Hugging Face*|
 
-## By the numbers
+### By the numbers
 
 **As of April 1, 2025, Protect AI has successfully scanned 4.47 million unique model versions in 1.41 million repositories on the Hugging Face Hub.**
 
 To date, Protect AI has identified a total of **352,000 unsafe/suspicious issues across 51,700 models**. In just the last 30 days, Protect AI has served **226 million requests** from Hugging Face at a **7.94 ms response time**.
 
-# **Maintaining a Zero Trust Approach to Model Security**
+## **Maintaining a Zero Trust Approach to Model Security**
 
 Protect AI’s Guardian applies a zero trust approach to AI/ML security. This especially comes into play when treating arbitrary code execution as inherently unsafe, regardless of intent. Rather than just classifying overtly malicious threats, Guardian flags execution risks as suspicious on InsightsDB, recognizing that even harmful code can look innocuous through obfuscation techniques (see more on payload obfuscation below). Attackers can disguise payloads within seemingly benign scripts or extensibility components of a framework, making payload inspection alone insufficient for ensuring security. By maintaining this cautious approach, Guardian helps mitigate risks posed by hidden threats in machine learning models.
 
-# **Evolving Guardian’s Model Vulnerability Detection Capabilities**
+## **Evolving Guardian’s Model Vulnerability Detection Capabilities**
 
 AI/ML security threats are evolving every single day. That's why Protect AI leverages both in-house [threat research teams](https://protectai.com/threat-research) and [huntr](https://huntr.com)—the world's first and largest AI/ML bug bounty program powered by our community of over 17,000 security researchers.
 
@@ -48,7 +48,7 @@ Coinciding with our partnership launch in October, Protect AI launched a new pro
 |:--:|
 |***Figure 2:** huntr’s bug bounty program*|
 
-## Common attack themes
+### Common attack themes
 
 As more huntr reports come in and more independent threat research is conducted, certain trends have emerged.
 
@@ -60,7 +60,7 @@ As more huntr reports come in and more independent threat research is conducted,
 
 **Attack vector chaining**: Recent reports demonstrate how multiple vulnerabilities can be combined to create sophisticated attack chains that can bypass detection. By sequentially exploiting vulnerabilities like obfuscated payloads and extension mechanisms, researchers have shown complex pathways for compromise that appear benign when examined individually. This approach significantly complicates detection and mitigation efforts, as security tools focused on single-vector threats often miss these compound attacks. Effective defense requires identifying and addressing all links in the attack chain rather than treating each vulnerability in isolation.
 
-# **Delivering Comprehensive Threat Detection for Hugging Face Users**
+## **Delivering Comprehensive Threat Detection for Hugging Face Users**
 
 The industry-leading Protect AI threat research team, with help from the huntr community, is continuously gathering data and insights in order to develop new and more robust model scans as well as automatic threat blocking (available to Guardian customers). In the last few months, Guardian has:
 
@@ -76,7 +76,7 @@ The industry-leading Protect AI threat research team, with help from the huntr c
 
 **Provided deeper model analysis:** Actively researching additional ways to augment current detection capabilities for better analysis and detection of unsafe models. Expect to see significant enhancements in reducing both false positives and false negatives in the near future.
 
-# **It Only Gets Better from Here**
+## **It Only Gets Better from Here**
 
 Through the partnership between Protect AI and Hugging Face, we’ve made third-party ML models safer and more accessible. We believe that having more eyes on model security can only be a good thing. We’re increasingly seeing the security world pay attention and lean in, making threats more discoverable and AI usage safer for all.
 
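The "zero trust" and "attack vector chaining" paragraphs in the post above describe payloads that look benign under inspection yet execute on load. As a minimal sketch of that attack shape, assuming nothing about Guardian's actual scanner (the class name and payload below are hypothetical, not from the post or the product), a pickle-serialized "model" can chain base64 obfuscation with an execution primitive so that a plain string scan of the file finds nothing suspicious, while merely deserializing it runs the hidden code:

```python
# Illustrative sketch only -- not Guardian's scanner or any code from the post.
# Shows why loading an untrusted pickle is treated as code execution, and why
# payload inspection alone can miss an obfuscated payload.
import base64
import pickle

class InnocentLookingModel:
    def __reduce__(self):
        # pickle calls __reduce__ to learn how to rebuild this object; an
        # attacker may return any callable, which then runs at load time.
        cmd = b"print('arbitrary code ran at model load time')"
        blob = base64.b64encode(cmd)
        # Only the base64 blob plus a decode-and-exec stub is written to the
        # file, chaining obfuscation with an execution primitive.
        return (exec, (f"import base64; exec(base64.b64decode({blob!r}))",))

data = pickle.dumps(InnocentLookingModel())
assert b"arbitrary code" not in data   # plaintext payload never hits the file
pickle.loads(data)                     # loading alone executes the hidden code
```

A detector that only pattern-matches known-bad strings would pass this file; flagging the execution primitive itself, here `exec` reachable from a pickle opcode, is the zero trust stance the post describes.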
0 commit comments
