Partner-led cybersecurity & AI governance

Disciplined security.
Trusted AI.
Confident oversight.

TeraType works with executive teams that want clarity, strong controls, and credible evidence. We design governance, risk, and assurance programs that hold up under audit, due diligence, and regulatory review.

Read the briefings · Offerings · Contact

Quietly precise work, built to hold up under scrutiny.

First-pass evidence
Controls mapped to owners and artifacts auditors accept
Integration discipline
Tight scopes, short-lived credentials, monitored tool paths
Executive cadence
A steady rhythm for decisions, exceptions, and risk closure
Operational proof
Logging and telemetry that support real investigations

Signals we track

Focus areas appearing repeatedly in due diligence, incidents, and audit findings

February 2026
Agentic AI tool exposure
Rising ↑
Non-human identity sprawl
Critical ↑
Integration token risk
High
EU AI Act readiness gap
High ↑
Evidence freshness risk
Moderate
Supply chain AI poisoning
Emerging
5
Frameworks covered in readiness assessments
Aug '26
EU AI Act high-risk enforcement — 5 months
47d
Upcoming maximum lifetime for public TLS certificates
4+4
Cloud platforms: AWS, Azure, GCP, OCI
Executive Intelligence

Briefings for Boards and Leadership

Short, usable notes for leadership conversations. February 2026 edition.

AI Security · Regulation · Supply chain · Hygiene
AI Security — New February 2026
Agent-to-agent identity risk: implicit trust becomes an attack surface

As enterprises deploy multi-agent systems, attackers are exploiting trust relationships between agents — using session smuggling, impersonation, and unauthorized capability escalation to cross security boundaries without human involvement.

Why it matters: A compromised research agent can inject hidden instructions into output consumed by a financial or workflow agent, executing unintended actions with full system authority. IBM X-Force 2026 identifies non-human identity compromise as a primary vector.
Action: Establish explicit authentication between agents. Do not rely on implicit trust within a pipeline — every agent invocation should carry a verifiable identity.
Action: Implement data loss prevention at the agent boundary. Control what information each agent can expose to downstream systems or external APIs.
Action: Audit agent-to-agent communication paths. Map which agents hold credentials for other agents. A compromised orchestrator with downstream keys gives lateral movement equivalent to a domain admin account.
For boards: Ask which AI pipelines include agent-to-agent calls, what identity model governs them, and whether there is a centralized log of inter-agent actions. Ask who owns the answer when an agent chain causes a data incident.
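The authentication action above can be sketched as a signed-invocation check. This is a minimal illustration under stated assumptions, not a recommended production scheme: the agent names, the in-memory key registry, and the HMAC shared-secret approach are all hypothetical, and a real deployment would typically use asymmetric keys or a workload-identity system rather than shared secrets.

```python
import hashlib
import hmac
import json
import time

# Assumed key registry: in production this would live in a secrets manager or
# workload-identity system, not an in-memory dict.
AGENT_KEYS = {"research-agent": b"demo-key-research"}

def sign_invocation(agent_id: str, payload: dict, key: bytes) -> dict:
    """Caller signs its own invocation so the receiver can verify who sent it."""
    body = json.dumps(payload, sort_keys=True)
    ts = str(int(time.time()))
    sig = hmac.new(key, f"{agent_id}|{ts}|{body}".encode(), hashlib.sha256).hexdigest()
    return {"agent_id": agent_id, "ts": ts, "body": body, "sig": sig}

def verify_invocation(msg: dict, max_age_s: int = 300) -> bool:
    """Receiver rejects unknown agents, bad signatures, and stale (replayed) calls."""
    key = AGENT_KEYS.get(msg["agent_id"])
    if key is None:
        return False  # unknown agent: no implicit trust inside the pipeline
    expected = hmac.new(
        key, f'{msg["agent_id"]}|{msg["ts"]}|{msg["body"]}'.encode(), hashlib.sha256
    ).hexdigest()
    fresh = int(time.time()) - int(msg["ts"]) <= max_age_s
    return hmac.compare_digest(expected, msg["sig"]) and fresh

msg = sign_invocation("research-agent", {"task": "summarize"}, AGENT_KEYS["research-agent"])
assert verify_invocation(msg)                                     # authentic call passes
assert not verify_invocation(dict(msg, body='{"task": "evil"}'))  # tampered call fails
```

The point of the sketch is the shape of the control: every hop carries a verifiable identity, unknown callers are rejected by default, and a freshness window limits replay.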
Regulation — Updated February 2026
EU AI Act: five months to high-risk enforcement — documentation is the test

August 2, 2026 brings full enforcement power for high-risk AI systems and Commission authority over GPAI models. Teams underestimate how hard it is to retrofit compliant technical documentation and post-market monitoring after launch.

Why it matters: The AI Office gains investigatory powers on August 2, 2026: document requests, model recalls, mandated mitigations, and fines up to €35M or 7% of global turnover. High-risk systems in hiring, credit, biometrics, healthcare, and law enforcement face the highest scrutiny.
Action: Complete your AI system inventory and classify each by risk tier. Systems deployed before the deadline that lack documentation are immediately exposed to enforcement action.
Action: Implement an intake gate with test criteria, rollback rules, and change control for models and prompts. Document every deployment decision with dated approvals.
Action: Establish post-market monitoring signals and an incident path that connects to security operations. Serious incident reporting for high-risk systems is mandatory under Article 73.
For boards: Ask which AI systems influence hiring, access, pricing, eligibility, or decisions about individuals. Ask whether each has named owners, test records, monitoring dashboards, and a documented incident path.
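The inventory-and-classification action can be illustrated with a small risk-tier gate. The domain list mirrors the categories the briefing names; the field names and helper functions are hypothetical, and real classification follows the EU AI Act's Annex III criteria rather than a simple domain lookup.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Domains the briefing names as facing the highest scrutiny.
HIGH_RISK_DOMAINS = {"hiring", "credit", "biometrics", "healthcare", "law-enforcement"}

@dataclass
class AISystem:
    name: str
    domain: str
    owner: Optional[str]   # named owner, or None if unassigned
    has_tech_docs: bool    # compliant technical documentation exists

def classify(system: AISystem) -> str:
    """Crude tier assignment; real classification follows EU AI Act Annex III."""
    return "high-risk" if system.domain in HIGH_RISK_DOMAINS else "lower-risk"

def deployment_gaps(systems: List[AISystem]) -> List[Tuple[str, str]]:
    """List what blocks a defensible deployment record for each high-risk system."""
    gaps = []
    for s in systems:
        if classify(s) != "high-risk":
            continue
        if s.owner is None:
            gaps.append((s.name, "no named owner"))
        if not s.has_tech_docs:
            gaps.append((s.name, "missing technical documentation"))
    return gaps

screener = AISystem("resume-screener", "hiring", owner=None, has_tech_docs=False)
assert classify(screener) == "high-risk"
assert len(deployment_gaps([screener])) == 2  # both gaps surface before deployment
```

Run at intake and on every change, a gate like this turns "complete your inventory" into a checkable condition rather than a statement of intent.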
Identity — New February 2026
Non-human identities are the primary cloud breach vector in 2026

Machine identities now outnumber human users by orders of magnitude. Service accounts, API keys, agent tokens, and pipeline credentials are accumulating faster than they can be governed — and attackers know it.

Why it matters: IBM X-Force 2026 identifies NHI compromise as the fastest-growing attack vector. Over-privileged machine identities enable silent lateral movement that evades human-identity controls entirely. Developers hardcode API keys; pipeline credentials drift; agents accumulate permissions that nobody reviews.
Action: Build a machine identity inventory. Map every service account, API key, agent token, and OAuth grant. Classify by privilege level and business owner — not just system owner.
Action: Apply least privilege at the machine identity level. Treat over-permissioned service accounts the same as over-permissioned human accounts: remediate on a defined schedule.
Action: Detect anomalous machine identity use. Just-in-time access, usage baselines, and automated alerts for credential misuse are essential controls, not aspirational ones.
For boards: Ask how many machine identities exist across cloud environments, what the last audit found, and who owns the remediation backlog. Ask whether AI agents have dedicated identities or share service accounts.
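The inventory and least-privilege actions can be sketched as a simple review pass over a machine identity register. The schema and thresholds here are illustrative assumptions; a real program would pull this from cloud IAM APIs and a CMDB rather than hand-built records.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class MachineIdentity:
    name: str
    kind: str                      # e.g. "service-account", "api-key", "agent-token"
    privilege: str                 # e.g. "admin", "write", "read"
    business_owner: Optional[str]  # business owner, not just system owner
    days_since_review: int

def review_findings(inventory: List[MachineIdentity],
                    max_review_age_days: int = 90) -> List[Tuple[str, str]]:
    """Flag the three gaps the briefing calls out: ownership, privilege, staleness."""
    findings = []
    for ident in inventory:
        if ident.business_owner is None:
            findings.append((ident.name, "no business owner assigned"))
        if ident.privilege == "admin":
            findings.append((ident.name, "admin privilege: justify or reduce"))
        if ident.days_since_review > max_review_age_days:
            findings.append((ident.name, "access review overdue"))
    return findings

inventory = [
    MachineIdentity("ci-deployer", "service-account", "admin", None, 200),
    MachineIdentity("billing-reader", "api-key", "read", "finance-ops", 30),
]
assert len(review_findings(inventory)) == 3  # all three findings hit ci-deployer
```

The output doubles as the remediation backlog a board can ask about: each finding has a named identity and a reason, ready to assign and date.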
Regulation February 2026
NIS2: enforcement expectations and supplier accountability

NIS2 obligations increasingly appear as supervisory expectations: risk measures you can evidence, incident handling discipline, and supplier accountability that works under time pressure.

Why it matters: Incident reporting depends on detection, classification, and decision speed. Weak supplier terms can block recovery and notification when hours matter.
Action: Confirm scope and assign owners for measures that require evidence, not intent. A written register of which controls are owned, by whom, and to what standard is the start of an audit conversation, not the end of it.
Action: Test reporting flow end-to-end — including classification criteria, escalation, and decision logging. Tabletop exercises that do not include the communications decision are incomplete.
Action: Strengthen supplier evidence and notification terms. Validate that vendors can meet timelines in practice, not just in contract language.
For boards: Ask for readiness by business unit, mean time to detect and classify, and top supplier dependencies that impact reporting deadlines.
AI Security — New February 2026
AI supply chain: model files, datasets, and open-source components carry executable risk

Open-weight models and shared datasets introduce executable risk at load time. Cisco's State of AI Security 2026 identifies 43 agent framework components with embedded vulnerabilities introduced via supply chain compromise — most running unpatched.

Why it matters: Model files can contain executable code that runs during loading. As few as 250 poisoned documents in a training set can implant a backdoor that activates on specific trigger phrases while leaving general performance unchanged.
Action: Treat model and dataset downloads as untrusted third-party dependencies. Apply the same intake controls as software libraries: source verification, hash validation, and vulnerability scanning.
Action: Maintain a model inventory with version, source, training data provenance, and last-reviewed date. Treat model updates as change-controlled events.
Action: Audit open-source agent framework versions. Many teams are running components with known vulnerabilities introduced through upstream supply chain attacks.
For boards: Ask whether AI model and dataset procurement has the same governance as software procurement. Ask what controls exist between a model download and a production deployment.
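The hash-validation step in the intake controls above can be sketched with the standard library. The digest is pinned at intake review and checked before every load; the file here is a stand-in, and in practice the pinned value would live in the model inventory alongside version and provenance.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream in chunks so multi-gigabyte weight files never load into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, pinned_digest: str) -> bool:
    """Compare against a digest pinned at intake review, not one fetched at load time."""
    return sha256_of(path) == pinned_digest.lower()

# Stand-in for a downloaded model file.
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as tmp:
    tmp.write(b"stand-in model weights")
    model_path = Path(tmp.name)

pinned = sha256_of(model_path)  # in practice, recorded in the inventory at intake
assert verify_artifact(model_path, pinned)
assert not verify_artifact(model_path, "0" * 64)  # tampered or swapped file fails
```

Hash pinning does not detect a poisoned upstream artifact, only a changed one; it belongs alongside source verification and scanning, not in place of them.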
Hygiene — Updated February 2026
March 2026: TLS cert lifetimes drop to 47 days — manual hygiene fails

Public certificate authorities will enforce a 47-day maximum lifetime from March 2026. Teams without automation face a severalfold increase in renewal frequency and a corresponding surge in outage risk.

Why it matters: Expiring TLS certs create public failures fast — and they are entirely avoidable. Slow patch cycles and unmanaged shadow endpoints compound the exposure.
Action: Inventory every internet-facing asset and certificate. Shadow endpoints — especially vendor-managed ones — are the highest risk. You cannot renew what you cannot see.
Action: Automate certificate issuance and renewal. ACME-based automation (Let's Encrypt, DigiCert, AWS ACM) is mature and should be deployed everywhere certificates exist today.
Action: Add monitoring for expiry, mis-issuance, and trust chain anomalies. A single expired cert on an API endpoint can break integrations silently, not just browsers visibly.
For boards: Ask for the count of unmanaged internet assets, current cert expiry exposure, and the automation plan with a dated completion milestone.
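The expiry-monitoring action can be sketched with the Python standard library. The hostnames and the alert threshold are illustrative; a real program would feed a full asset inventory through the check on a schedule and route alerts to the renewal automation, not to a human inbox.

```python
import datetime as dt
import socket
import ssl
from typing import Dict, List

def days_until_expiry(hostname: str, port: int = 443) -> int:
    """Read the live certificate's notAfter over a TLS handshake (requires network)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires = dt.datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), dt.timezone.utc
    )
    return (expires - dt.datetime.now(dt.timezone.utc)).days

def renewal_alerts(expiry_days: Dict[str, int], threshold_days: int = 15) -> List[str]:
    """With a 47-day maximum lifetime, anything inside the threshold needs action now."""
    return sorted(host for host, days in expiry_days.items() if days <= threshold_days)

# Feeding scanner output (hostname -> days remaining) into the alert filter:
assert renewal_alerts({"api.example.com": 40, "www.example.com": 9}) == ["www.example.com"]
```

Monitoring like this is the backstop, not the fix: ACME-based automation handles renewal, and the expiry check catches the endpoints automation missed.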
What we do

Focused offerings for complex environments

Work designed for careful review, long-term reuse, and evidence that travels.

Executive vCISO and governance

Board-ready
  • Executive risk reporting with clear narratives and metrics boards can act on
  • Policies, standards, and control frameworks aligned to how teams operate
  • ISMS and PIMS build-out, internal audit, and certification support
  • M&A diligence and integration for security, privacy, and AI programs

Assurance and compliance

Evidence
  • ISO 27001, 27017, 27018, and ISO 27701:2025 transition planning
  • SOC 2 readiness with continuous evidence and control ownership
  • PCI DSS 4.0 scope, segmentation, and Report on Compliance preparation
  • HIPAA safeguards, plus Data Processing Addenda and Business Associate Agreements aligned with actual practice

Threat, detection, and response

Operational
  • Curated detections, tuning, and noise suppression across cloud and endpoint
  • Attack simulations, red teaming, purple teaming, incident response playbooks
  • Post-incident reviews with owners and due dates that stick
  • Exercises that include executive decision-making and communications
AI Governance

Operational AI that stands up to review

Controls for inventories, gates, testing, monitoring, and oversight. Built for August 2026.

Framework

ISO/IEC 42001
AI Management System

The international standard for AI governance. Defines scope, roles, lifecycle controls, and continuous improvement cycles for AI systems at an organizational level.

Operating model Risk register Internal audit
Inventory & classification
Every AI system named, risk-tiered, and owner-assigned before it is used in production
EU AI Act Art. 6 · ISO 42001 §8
Intake gates
Test criteria, rollback rules, and change control for models, prompts, and configurations
ISO 42001 §8.4 · NIST AI RMF MAP
Post-market monitoring
Defined signals, drift detection, and an incident path aligned to security operations
EU AI Act Art. 72 · ISO 42001 §9
Human oversight
Oversight mechanisms that are more than nominal, including automation-bias controls
EU AI Act Art. 14 · ISO 42001 §6
Transparency records
Documentation and logs that reviewers — and regulators — will actually accept
EU AI Act Art. 11–13 · GPAI obligations
Third-party AI due diligence
Due-diligence controls for procured models, datasets, and AI-enabled SaaS tools
ISO 42001 §8.6 · NIS2 supply chain

Secure models and agents

Hardening
  • Data minimization, prompt sanitization, and output validation
  • Guardrails, allowlists, and policy-aligned configurations
  • Red teaming, adversarial testing, and misuse monitoring
  • Agent privilege matrix and least-privilege deployment patterns

Regulatory readiness

EU AI Act
  • EU AI Act high-risk mapping, documentation, and compliance by August 2026
  • GPAI obligations: transparency, copyright policy, systemic risk assessment
  • Alignment to existing ISO 27001, 27701, and SOC 2 programs
  • Post-market monitoring and incident handling for AI systems

AI risk register

Operating model
  • Structured risk identification aligned to ISO 42001 §6 and NIST AI RMF
  • Key performance indicators for AI system quality and oversight effectiveness
  • Internal assurance cycle: audit, review, and closure discipline
  • Board-ready narrative with dated risk owners and remediation timelines
Frameworks

Compliance and assurance in one view

Evidence that withstands scrutiny

Traceable
  • Policies, procedures, and standards linked to named controls and owners
  • Architectures, data flows, and segregation proofs that travel to auditors
  • Key management and rotation logs with verifiable timestamps
  • Scans, tests, and resilience exercises with full traceability chains

Cloud and SaaS assurance

Vendor risk
  • Cloud Security Alliance CAIQ domains mapped to real control owners
  • Business continuity and incident response alignment across providers
  • Vendor risk, privacy, and AI obligations integrated into one review cycle
  • Exit, return, and erase provisions tested — not just documented

ISO 14001 and ESG

Supplier proof
  • Environmental and social metrics aligned with governance discipline
  • Supplier expectations embedded in contracts and due diligence frameworks
  • Evidence for sustainability claims backed by verifiable data
Cloud

Patterns that scale across all four major platforms

Amazon Web Services Microsoft Azure Google Cloud Platform Oracle Cloud Infrastructure

Identity

Least privilege
  • Least privilege baselines with modern IAM including machine identities
  • Conditional access and time-bound elevation with full logging
  • Break-glass design and non-human identity governance

Network

Egress control
  • Egress control and deep packet inspection
  • Micro-segmentation and service identity
  • Private connectivity and routing safeguards

Data

Resilience
  • Centrally managed encryption keys
  • Field-level protection and tokenization
  • Immutable backups with verified restore

Observability

Signal
  • Risk-focused rules and alert suppression
  • Optimized collection balancing cost and signal quality
  • Automated playbooks with human oversight gates

Cost levers

  • Rightsizing, scheduling, and commitment planning
  • Storage tiering with guardrails
  • Transparent showback to business owners

Secrets and keys

  • Single source of truth for secrets across platforms
  • Rotation playbooks and access attestations
  • Integrated detection for misuse and anomalous access

Vendor risk

  • Tiered intake with clear risk-based criteria
  • Security addenda, DPAs, and BAAs with enforceable terms
  • Exit, return, and erase tested in practice — not just contracted
Executive

Board reporting without noise

A consistent view of exposure, control health, and the decisions that need to be made.

"Executive teams deserve security and AI governance reporting they can actually use — not dashboards that describe activity without clarifying risk, and not compliance summaries that pass on paper while failing in practice."

TeraType — Partner-led advisory

Risk and control health

Trends
  • Top risks with trend direction and accountable owners
  • Coverage and gap heatmaps that stay current
  • Closure timing and exception discipline with dates

Compliance at a glance

Readiness
  • ISO 27001 family, ISO 27701:2025, SOC 2, PCI DSS, HIPAA, ISO 42001
  • Evidence freshness, renewal timelines, and audit calendar
  • Change radar: EU AI Act, DORA, NIS2, state privacy law updates

Cost and value

Decision
  • Security and AI governance spend aligned to risk reduction outcomes
  • Prioritization that balances control effectiveness and delivery velocity
  • Tradeoffs documented, dated, and revisited at defined intervals
Contact

Speak with TeraType

United States
+1 888 964 6699
European Union
+421 233056 377

We use your information only to respond. We do not sell personal data.

Privacy

Privacy notice

Effective date: February 1, 2026

Who we are

TeraType is a cybersecurity, privacy, and AI governance advisory firm. We help clients design, operate, and evidence governance, risk, compliance, and security programs.

Scope

This notice covers personal information we process when you visit this website or interact with us. Client data processed under contract is subject to the relevant Data Processing Addendum or Business Associate Agreement.

Information we collect

  • Contact details such as name, email, phone, and message content you submit.
  • Technical data such as IP address, device details, and basic analytics configured to minimize identifiers.
  • Business information you share about your organization, needs, or timelines.

How we use your information

  • To respond to inquiries and provide requested information.
  • To operate, secure, and improve our site and services.
  • To comply with legal obligations and protect our rights.
  • With consent to send occasional updates.

Legal bases

  • Legitimate interests for communications, security, and service improvement.
  • Consent for certain communications and cookies where required.
  • Legal obligation for recordkeeping and compliance.

Sharing

We do not sell personal information. We share limited data with service providers under confidentiality and security obligations, or as required by law.

International transfers

Where data moves across borders we use recognized mechanisms and safeguards.

Retention

We retain personal information only as long as needed for these purposes or as required by law, then delete or de-identify it.

Security

We apply administrative, technical, and organizational measures to protect personal information. No system is perfectly secure, so we encourage careful handling of credentials and vigilance for fraud.

Your rights

  • EEA and UK individuals may exercise rights of access, rectification, erasure, restriction, objection, and portability.
  • California residents may request access, deletion, and correction and may opt out of certain sharing.

Contact privacy@teratype.com to exercise rights.

Cookies

We use essential cookies. Optional analytics only run if you choose Allow on the banner.

Children

Our services target organizations, not children. Contact us to request deletion if a child has provided personal data.

Changes

We may update this notice and will adjust the effective date.

Data Processing Addenda and Business Associate Agreements

Available on request.