Partner-led cybersecurity & AI governance

Strategic security.
Transparent oversight.
Trusted impact.

TeraType works with executive teams that demand clarity, actionable controls, and evidence that withstands scrutiny. We design governance, risk, and assurance programs built for audit, due diligence, and regulatory review.

Read the briefings · Offerings · Contact

Precision work, built to hold up under scrutiny.

Evidence discipline
Controls mapped to artifacts auditors accept on first review
Integration rigor
Scoped access, ephemeral credentials, monitored data paths
Decision rhythm
Consistent cadence for exceptions, risk acceptance, closure
Operational signal
Logging and telemetry that support real investigations

Signals we track

Recurring themes in due diligence, incident response, and regulatory enforcement

April 2026

  • Prompt injection exposure: Critical ↑
  • Model weight leakage risk: High ↑
  • Non-human identity sprawl: Critical
  • EU AI Act readiness gap: High ↑
  • SEC disclosure compliance: Moderate ↑
  • Cloud credential dwell time: High

  • 4 months until EU AI Act high-risk enforcement
  • 340% YoY increase in prompt injection attacks
  • 14 new state privacy laws active in 2026
  • 8.2 days median cloud credential theft dwell time
Executive Intelligence

Briefings for Boards and Leadership

Strategic context for leadership conversations. April 2026 edition.

AI Security — New April 2026
Prompt injection attacks surge 340% — indirect attacks now the primary vector

Attackers are embedding malicious instructions in documents, emails, and web content that AI systems retrieve and execute. Recorded Future reports a 340% year-over-year increase, with indirect prompt injection now accounting for 73% of successful AI system compromises.

Why it matters: Unlike direct prompt injection (attacking via user input), indirect injection exploits retrieval-augmented generation (RAG) systems. When an AI reads a poisoned document or webpage, embedded instructions override system prompts — enabling data exfiltration, unauthorized actions, and lateral movement without user awareness.
Action: Implement input sanitization on all retrieved content before it reaches the model. Treat external documents, web scrapes, and API responses as untrusted data requiring validation.
Action: Establish output filtering with explicit allow-lists for sensitive operations. No AI system should execute file access, data transmission, or API calls based solely on generated output without human confirmation.
Action: Deploy prompt firewalls that detect instruction-like patterns in retrieved content. Monitor for anomalous system prompt overrides and log all reasoning chains for post-incident analysis.
For boards: Ask which AI systems retrieve external content, what validation occurs before ingestion, and whether output actions require human approval. Ask who owns the response when a poisoned document causes a data incident.
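As an illustrative sketch of the sanitization and prompt-firewall steps above, a pattern-based pre-filter for retrieved content might look like the following. The pattern list and function names are assumptions for illustration; production firewalls pair pattern matching with model-based classifiers and far broader rule sets.

```python
import re

# Illustrative patterns only: this list is deliberately small and far from exhaustive.
INSTRUCTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior|above) instructions",
    r"disregard (the )?(system|developer) prompt",
    r"you are now",
    r"reveal (your|the) (system prompt|instructions)",
    r"do not (tell|inform) the user",
]

def scan_retrieved_content(text: str) -> list[str]:
    """Return the instruction-like patterns found in untrusted retrieved content."""
    hits = []
    for pattern in INSTRUCTION_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            hits.append(pattern)
    return hits

def gate_document(text: str) -> str:
    """Quarantine documents that trip the filter instead of passing them to the model."""
    hits = scan_retrieved_content(text)
    if hits:
        raise ValueError(f"Retrieved content flagged for review: {hits}")
    return text
```

The same gate applies to web scrapes and API responses: every external input is scanned before it can reach a model context.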
AI Security — New April 2026
Model weight exposure: 47% of enterprise deployments affected by OpenAI leak

Wiz Security's April 2026 analysis reveals that 47% of enterprise AI deployments using OpenAI-derived models have potential exposure to leaked model weights. Organizations using fine-tuned or locally hosted variants face the highest risk of adversarial model extraction and training data inference attacks.

Why it matters: Leaked model weights enable attackers to perform membership inference (determining if specific data was used in training), extract proprietary prompts and system instructions, and craft targeted adversarial examples that bypass safety guardrails with near-perfect success rates.
Action: Inventory all AI models in production and classify by deployment type: API-only, locally hosted, fine-tuned, or embedded. Locally hosted and fine-tuned models carry the highest exposure to weight extraction attacks.
Action: Implement model access controls with authentication, rate limiting, and query logging. Anomalous query patterns (systematic probing, gradient estimation attempts) are indicators of model extraction in progress.
Action: For high-sensitivity deployments, consider differential privacy during training and watermarking techniques that enable leak attribution. Post-deployment monitoring should detect unusual inference patterns.
For boards: Ask which models are hosted internally versus accessed via API, what controls prevent unauthorized weight extraction, and whether model procurement includes leak notification obligations from vendors.
Disclosure — New April 2026
SEC cybersecurity disclosure rules: first enforcement actions signal materiality threshold

The SEC announced its first enforcement actions under the new cybersecurity disclosure rules in March 2026. The cases establish that "material" incidents include unauthorized access to customer data affecting >2% of the customer base, ransomware impacting core operations for >48 hours, and third-party breaches exposing credentials with financial system access.

Why it matters: The 4-day disclosure window (Form 8-K Item 1.05) begins when the registrant determines an incident is material — not when it is fully remediated. Delayed disclosure carries significant penalties: fines, officer certifications under scrutiny, and potential securities fraud liability.
Action: Establish materiality criteria with legal, finance, and security leadership. Document the decision process for every incident classification, including factors considered and who participated in the determination.
Action: Test incident response timelines end-to-end, including the disclosure decision path. A 4-day window requires pre-approved language templates, communication protocols, and executive availability for sign-off.
Action: Validate that third-party incident notification SLAs align with the 4-day disclosure requirement. If a vendor cannot confirm breach scope within 72 hours, your disclosure timeline is at risk.
For boards: Ask for materiality criteria with specific thresholds, incident classification decision logs from the past 12 months, and whether disclosure timelines have been tested in tabletop exercises.
Regulation — Updated April 2026
EU AI Act: 4 months to high-risk enforcement — technical documentation is the test

August 2, 2026 brings full enforcement authority for high-risk AI systems and Commission investigatory powers over general-purpose AI models. The AI Office gains document request authority, model recall powers, mandated mitigation orders, and fines up to €35M or 7% of global turnover.

Why it matters: Technical documentation (Article 11) must exist before deployment — retrofitting after launch is insufficient. High-risk systems in hiring, credit decisioning, biometrics, healthcare, law enforcement, and critical infrastructure face immediate scrutiny.
Action: Complete AI system inventory and risk classification using Annex III criteria. Systems deployed before August 2 that lack conformity documentation are immediately exposed to enforcement action.
Action: Establish deployment gates with test records, human oversight mechanisms, rollback procedures, and change control for models and prompts. Every deployment decision requires dated approvals and supporting evidence.
Action: Implement post-market monitoring (Article 72) with defined performance metrics, drift detection, and serious incident reporting paths that connect to security operations.
For boards: Ask which AI systems influence decisions about individuals, whether each has technical documentation prepared to Article 11 standards, and whether post-market monitoring is operational today.
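One common drift-detection metric for the post-market monitoring described above is the population stability index (PSI), computed over binned input or output distributions. This is a sketch of the standard formula, not a prescribed Article 72 method; the 0.2 threshold is a widely used rule of thumb, not a regulatory figure.

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned distributions (each a list of per-bin proportions).
    A common rule of thumb treats PSI above 0.2 as significant drift."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # floor empty bins to avoid log(0)
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi
```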
Regulation — New April 2026
DORA: ICT incident reporting thresholds finalized — time-based triggers now enforceable

The European Banking Authority finalized DORA ICT incident reporting thresholds on April 15, 2026. Financial entities must report major incidents within 4 hours of classification, intermediate reports at 72 hours, and final reports within one month. Service outages exceeding 2 hours for critical functions are automatically reportable.

Why it matters: The 4-hour initial notification window is shorter than most incident triage cycles. Organizations without pre-built classification criteria and notification workflows will miss deadlines, triggering supervisory action and potential enforcement.
Action: Map all ICT services to criticality tiers using the DORA framework. Define what constitutes "critical or important functions" with specific system names, not generic categories.
Action: Build incident classification playbooks with time-based triggers, impact thresholds, and decision trees that non-technical staff can execute. The 4-hour window assumes 24/7 classification capability.
Action: Test notification procedures including escalation, approval chains, and submission to competent authorities. Tabletop exercises should simulate weekends, holidays, and scenarios where incident scope is initially unclear.
For boards: Ask which ICT services are classified as critical under DORA, what the fastest incident classification time has been in the past year, and whether notification procedures have been tested end-to-end.
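A classification playbook that non-technical staff can execute reduces, in the simplest case, to an explicit decision tree over time-based and impact triggers. The sketch below encodes the 2-hour critical-outage trigger cited above; the client-impact threshold is a placeholder assumption, not a figure from the EBA standards.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    """Minimal incident record; field names are illustrative."""
    affects_critical_function: bool
    outage_hours: float
    clients_affected_pct: float

def classify_dora(incident: Incident) -> str:
    """Return 'major' (starting the 4-hour initial-report clock) or 'non-major'."""
    if incident.affects_critical_function and incident.outage_hours > 2:
        # Outages over 2 hours on critical functions are automatically reportable.
        return "major"
    if incident.clients_affected_pct >= 10:  # placeholder impact threshold
        return "major"
    return "non-major"
```

Encoding the tree this way also makes it testable in tabletop exercises: each scenario becomes one input record with an expected classification.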
Identity — Updated April 2026
Cloud credential theft: 8.2-day median dwell time signals detection gaps

CrowdStrike's 2026 Global Threat Report identifies cloud credential theft as the fastest-growing initial access vector, with a median dwell time of 8.2 days between compromise and detection. Over-privileged service accounts, unmonitored API keys, and agent tokens lacking baseline usage patterns account for 68% of successful lateral movement.

Why it matters: An 8.2-day window provides ample time for privilege escalation, data exfiltration, and persistence mechanism establishment. Attackers prioritize credentials with broad permissions and no anomaly detection — service accounts and API keys meet both criteria.
Action: Build a non-human identity inventory across all cloud platforms. Classify each credential by privilege level, business owner (not just system owner), and last review date.
Action: Implement usage baselines for machine identities with alerts on anomalous behavior: unusual geo-locations, API call patterns, permission escalation attempts, or access outside normal operational windows.
Action: Apply just-in-time access principles to high-privilege service accounts. Long-lived credentials with standing admin permissions are the highest-value targets for attackers.
For boards: Ask how many machine identities exist across cloud environments, what percentage have baseline usage monitoring, and what the current backlog is for over-privileged credential remediation.
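The usage-baseline control above can be sketched as a per-credential profile of known geographies and API calls, with alerts on deviations. Structure and names are assumptions; production systems score deviations statistically rather than hard-matching sets.

```python
from dataclasses import dataclass, field

@dataclass
class IdentityBaseline:
    """Learned-normal profile for one machine identity (illustrative structure)."""
    credential_id: str
    known_geos: set[str] = field(default_factory=set)
    known_api_calls: set[str] = field(default_factory=set)

def detect_anomalies(baseline: IdentityBaseline, geo: str, api_call: str) -> list[str]:
    """Flag observed activity that falls outside the credential's baseline."""
    findings = []
    if geo not in baseline.known_geos:
        findings.append(f"new geo for {baseline.credential_id}: {geo}")
    if api_call not in baseline.known_api_calls:
        findings.append(f"unbaselined API call: {api_call}")
    return findings
```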
What we do

Focused offerings for complex environments

Work designed for careful review, long-term reuse, and evidence that travels.

Executive vCISO and governance

Board-ready
  • Executive risk reporting with clear narratives and metrics boards can act on
  • Policies, standards, and control frameworks aligned to how teams operate
  • ISMS and PIMS build-out, internal audit, and certification support
  • M&A diligence and integration for security, privacy, and AI programs

Assurance and compliance

Evidence
  • ISO 27001, 27017, 27018, and ISO 27701:2025 transition planning
  • SOC 2 readiness with continuous evidence and control ownership
  • PCI DSS 4.0 scope, segmentation, and Report on Compliance preparation
  • HIPAA safeguards plus Data Processing Addenda and Business Associate Agreements aligned with practice

Threat, detection, and response

Operational
  • Curated detections, tuning, and noise suppression across cloud and endpoint
  • Attack simulations, red teaming, purple teaming, incident response playbooks
  • Post-incident reviews with owners and due dates that stick
  • Exercises that include executive decision-making and communications
AI Governance

Operational AI that stands up to review

Controls for inventories, gates, testing, monitoring, and oversight. Built for August 2026.

Framework

ISO/IEC 42001
AI Management System

The international standard for AI governance. Defines scope, roles, lifecycle controls, and continuous improvement cycles for AI systems at an organizational level.

Operating model Risk register Internal audit
Inventory & classification
Every AI system named, risk-tiered, and owner-assigned before it is used in production
EU AI Act Art. 6 · ISO 42001 §8
Intake gates
Test criteria, rollback rules, and change control for models, prompts, and configurations
ISO 42001 §8.4 · NIST AI RMF MAP
Post-market monitoring
Defined signals, drift detection, and an incident path aligned to security operations
EU AI Act Art. 72 · ISO 42001 §9
Human oversight
Oversight mechanisms that go beyond nominal sign-off, including automation-bias controls
EU AI Act Art. 14 · ISO 42001 §6
Technical documentation
Conformity documentation and logs that reviewers — and regulators — will actually accept
EU AI Act Art. 11–13 · GPAI obligations
Third-party AI due diligence
Procurement controls for models, datasets, and AI-enabled SaaS tools
ISO 42001 §8.6 · NIS2 supply chain

Secure models and agents

Hardening
  • Input sanitization, prompt firewalls, and output validation controls
  • Guardrails, allowlists, and policy-aligned configurations
  • Red teaming, adversarial testing, and prompt injection defenses
  • Agent privilege matrix and least-privilege deployment patterns
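An agent privilege matrix reduces, at its simplest, to a deny-by-default mapping from each agent to the tools it is explicitly granted. The agent and tool names below are invented for illustration.

```python
# Hypothetical agent-to-tool grants; names are illustrative, not a real deployment.
PRIVILEGE_MATRIX: dict[str, set[str]] = {
    "report-drafter": {"search_docs", "read_file"},
    "ops-assistant": {"search_docs", "read_file", "create_ticket"},
}

def authorize_tool_call(agent: str, tool: str) -> bool:
    """Deny by default: an agent may invoke only tools explicitly granted to it."""
    return tool in PRIVILEGE_MATRIX.get(agent, set())
```

Every tool invocation passes through this gate before execution, and denials are logged as security signal.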

Regulatory readiness

EU AI Act
  • EU AI Act high-risk mapping, technical documentation, and Article 11 compliance
  • GPAI obligations: transparency, copyright policy, systemic risk assessment
  • Alignment to existing ISO 27001, 27701, and SOC 2 programs
  • Post-market monitoring and serious incident reporting (Article 72)

AI risk register

Operating model
  • Structured risk identification aligned to ISO 42001 §6 and NIST AI RMF
  • Key performance indicators for AI system quality and oversight effectiveness
  • Internal assurance cycle: audit, review, and closure discipline
  • Board-ready narrative with dated risk owners and remediation timelines
Frameworks

Compliance and assurance in one view

Evidence that withstands scrutiny

Traceable
  • Policies, procedures, and standards linked to named controls and owners
  • Architectures, data flows, and segregation proofs that travel to auditors
  • Key management and rotation logs with verifiable timestamps
  • Scans, tests, and resilience exercises with full traceability chains

Cloud and SaaS assurance

Vendor risk
  • Cloud Security Alliance CAIQ domains mapped to real control owners
  • Business continuity and incident response alignment across providers
  • Vendor risk, privacy, and AI obligations integrated into one review cycle
  • Exit, return, and erase provisions tested — not just documented

ISO 14001 and ESG

Supplier proof
  • Environmental and social metrics aligned with governance discipline
  • Supplier expectations embedded in contracts and due diligence frameworks
  • Evidence for sustainability claims backed by verifiable data
Cloud

Patterns that scale across all four major platforms

Amazon Web Services · Microsoft Azure · Google Cloud Platform · Oracle Cloud Infrastructure

Identity

Least privilege
  • Least privilege baselines with modern IAM including machine identities
  • Conditional access and time-bound elevation with full logging
  • Break-glass design and non-human identity governance

Network

Egress control
  • Egress control and deep packet inspection
  • Micro-segmentation and service identity
  • Private connectivity and routing safeguards

Data

Resilience
  • Centrally managed encryption keys
  • Field-level protection and tokenization
  • Immutable backups with verified restore

Observability

Signal
  • Risk-focused rules and alert suppression
  • Optimized collection balancing cost and signal quality
  • Automated playbooks with human oversight gates

Cost levers

  • Rightsizing, scheduling, and commitment planning
  • Storage tiering with guardrails
  • Transparent showback to business owners

Secrets and keys

  • Single source of truth for secrets across platforms
  • Rotation playbooks and access attestations
  • Integrated detection for misuse and anomalous access

Vendor risk

  • Tiered intake with clear risk-based criteria
  • Security addenda, DPAs, and BAAs with enforceable terms
  • Exit, return, and erase tested in practice — not just contracted
Executive

Board reporting without noise

A consistent view of exposure, control health, and the decisions that need to be made.

"Executive teams deserve security and AI governance reporting they can actually use — not dashboards that describe activity without clarifying risk, and not compliance summaries that pass on paper while failing in practice."

TeraType — Partner-led advisory

Risk and control health

Trends
  • Top risks with trend direction and accountable owners
  • Coverage and gap heatmaps that stay current
  • Closure timing and exception discipline with dates

Compliance at a glance

Readiness
  • ISO 27001 family, ISO 27701:2025, SOC 2, PCI DSS, HIPAA, ISO 42001
  • Evidence freshness, renewal timelines, and audit calendar
  • Change radar: EU AI Act, DORA, NIS2, SEC disclosure, state privacy laws

Cost and value

Decision
  • Security and AI governance spend aligned to risk reduction outcomes
  • Prioritization that balances control effectiveness and delivery velocity
  • Tradeoffs documented, dated, and revisited at defined intervals
Contact

Speak with TeraType

United States
+1 888 964 6699
European Union
+421 233056 377

We use your information only to respond. We do not sell personal data.

Privacy

Privacy notice

Effective date: April 1, 2026

Who we are

TeraType is a cybersecurity, privacy, and AI governance advisory firm. We help clients design, operate, and evidence governance, risk, compliance, and security programs.

Scope

This notice covers personal information we process when you visit this website or interact with us. Client data processed under contract is subject to the relevant Data Processing Addendum or Business Associate Agreement.

Information we collect

  • Contact details such as name, email, phone, and message content you submit.
  • Technical data such as IP address, device details, and basic analytics configured to minimize identifiers.
  • Business information you share about your organization, needs, or timelines.

How we use your information

  • To respond to inquiries and provide requested information.
  • To operate, secure, and improve our site and services.
  • To comply with legal obligations and protect our rights.
  • With consent to send occasional updates.

Legal bases

  • Legitimate interests for communications, security, and service improvement.
  • Consent for certain communications and cookies where required.
  • Legal obligation for recordkeeping and compliance.

Sharing

We do not sell personal information. We share limited data with service providers under confidentiality and security obligations, or as required by law.

International transfers

Where data moves across borders we use recognized mechanisms and safeguards.

Retention

We retain personal information only as long as needed for these purposes or as required by law, then delete or de-identify it.

Security

We apply administrative, technical, and organizational measures to protect personal information. No system is perfectly secure, so we encourage careful handling of credentials and vigilance for fraud.

Your rights

  • EEA and UK individuals may exercise rights of access, rectification, erasure, restriction, objection, and portability.
  • California residents may request access, deletion, and correction and may opt out of certain sharing.

Contact privacy@teratype.com to exercise rights.

Cookies

We use essential cookies. Optional analytics only run if you choose Allow on the banner.

Children

Our services target organizations, not children. Contact us to request deletion if a child has provided personal data.

Changes

We may update this notice and will adjust the effective date.

Data Processing Addenda and Business Associate Agreements

Available on request.