gRPC Security: Authentication, TLS, and Authorization

Securing a gRPC service goes far beyond slapping TLS on the wire. gRPC's layered architecture provides distinct security mechanisms at the channel level (transport security) and the call level (per-RPC authentication and authorization). Understanding where each mechanism fits — and what gaps remain when one is missing — is the difference between a service that looks secure and one that actually is. This article covers the full stack: channel credentials, call credentials, TLS and mTLS configuration, token-based authentication with JWT and OAuth2, Google's ALTS protocol, per-RPC authorization, interceptor patterns, API keys, rate limiting, audit logging, and the most common security mistakes teams make in production. If you are building on gRPC, you should also understand how gRPC works at the protocol level and how TLS secures transport.

Channel Credentials vs. Call Credentials

gRPC separates security into two distinct layers. Channel credentials secure the connection itself — they establish the encrypted transport between client and server. Call credentials attach authentication metadata to individual RPCs. This separation is fundamental to gRPC's security model because it lets you combine different mechanisms cleanly: a TLS channel for encryption with OAuth2 tokens for identity, for example.

Channel credentials are applied once when the channel is created and persist for the lifetime of that channel. They handle the TLS handshake, certificate validation, and session key negotiation. Every RPC sent over that channel inherits the transport security the channel provides.

Call credentials, by contrast, are attached per-RPC (or per-channel, applying to every RPC). They typically carry tokens, keys, or other identity assertions in the request metadata. The server extracts these from the metadata and validates them independently of the transport layer.

gRPC also supports composite credentials, which combine a channel credential with a call credential into a single object. This is the idiomatic way to say "use TLS for the connection, and attach this OAuth2 token to every call":

// Go example: composite credentials
creds := credentials.NewTLS(&tls.Config{...})
perRPC := oauth.NewOauthAccess(token)
conn, err := grpc.Dial(
    "api.example.com:443",
    grpc.WithTransportCredentials(creds),
    grpc.WithPerRPCCredentials(perRPC),
)

The key insight is that channel credentials and call credentials solve different problems. Channel credentials answer "is this connection private and authentic?" Call credentials answer "who is making this specific request?" A secure gRPC deployment needs both.

[Diagram: a gRPC client calls a gRPC server over an HTTP/2 channel. Call credentials (JWT / OAuth2) travel as `authorization: Bearer ...` metadata and are validated by a server-side auth interceptor; channel credentials (TLS / mTLS) encrypt the HTTP/2 frames and are verified at TLS termination. Channel credentials secure the pipe; call credentials identify the caller.]

TLS Configuration for gRPC

gRPC runs on HTTP/2, and HTTP/2 in practice requires TLS. While the HTTP/2 specification technically allows cleartext (h2c), gRPC's default behavior and all major cloud deployments use TLS. Configuring TLS correctly for gRPC means handling certificates, hostname verification, and cipher suite selection — the same concerns as any TLS deployment, but with some gRPC-specific nuances.

On the server side, you need at minimum a certificate and private key:

// Go: TLS server setup
cert, _ := tls.LoadX509KeyPair("server.crt", "server.key")
creds := credentials.NewTLS(&tls.Config{
    Certificates: []tls.Certificate{cert},
    MinVersion:   tls.VersionTLS13,
})
server := grpc.NewServer(grpc.Creds(creds))

On the client side, the TLS configuration must include the CA certificate used to verify the server, unless you are relying on system root CAs:

// Go: TLS client with custom CA
caCert, _ := os.ReadFile("ca.crt")
certPool := x509.NewCertPool()
certPool.AppendCertsFromPEM(caCert)
creds := credentials.NewTLS(&tls.Config{
    RootCAs:    certPool,
    MinVersion: tls.VersionTLS13,
})
conn, _ := grpc.Dial("api.example.com:443",
    grpc.WithTransportCredentials(creds),
)

A critical detail many teams overlook: gRPC performs hostname verification by default. The server certificate's Subject Alternative Name (SAN) must match the hostname used in the Dial call. If you connect to an IP address but the certificate only has a DNS SAN, the handshake will fail. This catches misconfiguration early, which is good — but it also means you must plan your certificate issuance carefully.

Mutual TLS (mTLS)

Standard TLS is one-sided: the client verifies the server's identity, but the server does not verify the client. Mutual TLS (mTLS) adds client certificate verification, so both sides authenticate each other during the handshake. This is the strongest form of transport-level authentication gRPC supports.

With mTLS, the server is configured to require client certificates:

// Server with mTLS
creds := credentials.NewTLS(&tls.Config{
    Certificates: []tls.Certificate{serverCert},
    ClientAuth:   tls.RequireAndVerifyClientCert,
    ClientCAs:    clientCACertPool,
    MinVersion:   tls.VersionTLS13,
})

And the client presents its own certificate:

// Client with mTLS
clientCert, _ := tls.LoadX509KeyPair("client.crt", "client.key")
creds := credentials.NewTLS(&tls.Config{
    Certificates: []tls.Certificate{clientCert},
    RootCAs:      serverCACertPool,
    MinVersion:   tls.VersionTLS13,
})

mTLS is particularly common in service-to-service communication within microservice architectures. Service meshes like Istio and Linkerd automate mTLS by injecting sidecar proxies that handle certificate provisioning, rotation, and the TLS handshake transparently. The application code never sees a certificate — it speaks plaintext to the local sidecar, which encrypts on the wire.

[Diagram: the mTLS handshake between a client and server that each hold a certificate. (1) ClientHello (TLS 1.3); (2) ServerHello + server certificate + CertificateRequest; (3) client certificate + CertificateVerify + Finished; (4) Finished — mutual authentication complete; (5) encrypted gRPC traffic with both sides verified. Both client and server present and verify X.509 certificates.]

The trade-off with mTLS is operational complexity. Every client needs a certificate, those certificates need to be provisioned and rotated, and revocation must be handled. For external-facing APIs where clients are third-party developers, mTLS is usually impractical — token-based authentication is more appropriate. For internal service-to-service traffic, mTLS provides strong mutual authentication without requiring application-layer token management.

Token-Based Authentication: JWT and OAuth2

For most gRPC APIs — especially those facing external clients — token-based authentication is the standard approach. The two most common token types are JSON Web Tokens (JWTs) and OAuth2 access tokens. Both are carried as call credentials in gRPC metadata.

JWT Authentication

A JWT is a self-contained token that encodes claims (user identity, permissions, expiration) as a signed JSON payload. The server can validate a JWT without contacting an external service — it just verifies the cryptographic signature. This makes JWTs excellent for high-throughput gRPC services where adding a network round-trip for every RPC would be unacceptable.

The typical flow:

  1. The client authenticates with an identity provider (IdP) and receives a JWT.
  2. The client attaches the JWT to gRPC metadata as a bearer token: authorization: Bearer eyJhbGci...
  3. A server-side interceptor extracts the token, verifies its signature against the IdP's public key, checks expiration and claims, and either allows or rejects the RPC.
# Python: JWT interceptor (server-side)
def _abort_handler(code, details):
    def abort(request, context):
        context.abort(code, details)
    return grpc.unary_unary_rpc_method_handler(abort)

class JWTInterceptor(grpc.ServerInterceptor):
    def intercept_service(self, continuation, handler_call_details):
        metadata = dict(handler_call_details.invocation_metadata)
        auth = metadata.get('authorization', '')
        token = auth[len('Bearer '):] if auth.startswith('Bearer ') else ''
        try:
            claims = jwt.decode(token, PUBLIC_KEY, algorithms=['RS256'])
            # Attach claims to context for the authorization layer
        except jwt.InvalidTokenError:
            return _abort_handler(grpc.StatusCode.UNAUTHENTICATED,
                                  'invalid token')
        return continuation(handler_call_details)

The critical security properties: use RS256 or ES256 (asymmetric algorithms), never HS256 with a shared secret in a distributed system. Always validate the exp, iss, and aud claims. Reject tokens with the none algorithm — this is a well-known JWT attack vector.

OAuth2 with gRPC

OAuth2 provides a framework for delegated authorization — a client obtains an access token from an authorization server, then presents that token to the resource server (your gRPC service). The access token may be a JWT (in which case the server can validate it locally) or an opaque token (requiring introspection against the authorization server).

gRPC has built-in support for OAuth2 in several languages. In Go, the oauth package provides credential types that handle token refresh automatically:

// Go: OAuth2 token source with auto-refresh
perRPC := oauth.TokenSource{
    TokenSource: oauth2.ReuseTokenSource(nil, src),
}
conn, _ := grpc.Dial(addr,
    grpc.WithTransportCredentials(tlsCreds),
    grpc.WithPerRPCCredentials(perRPC),
)

For Google Cloud services, gRPC clients can use Application Default Credentials (ADC), which automatically acquire and refresh OAuth2 tokens from the environment — whether running on GCE, GKE, or with a service account key file. This is the most common pattern for gRPC services running in Google Cloud.

[Diagram: OAuth2 + gRPC authentication flow. (1) The gRPC client requests a token from the auth server (client_credentials grant); (2) receives a JWT access token; (3) calls the gRPC server with the bearer token in metadata, where the server verifies the JWT signature; (4) receives the authorized response. The token is obtained once and reused across RPCs until expiration.]

Google's ALTS (Application Layer Transport Security)

ALTS is Google's proprietary transport security protocol, designed specifically for service-to-service communication within Google's infrastructure. It is an alternative to TLS that is optimized for the data center environment where both endpoints are running on Google-managed machines.

Unlike TLS, which uses X.509 certificates issued by certificate authorities, ALTS ties identity to the workload — the service account or job running on the machine — rather than to a hostname. The key differences from TLS: handshake messages are serialized as Protocol Buffers instead of the TLS record format, credentials are issued and rotated automatically by Google's internal infrastructure rather than a public CA, and the identity verified during the handshake is a service account, which maps directly onto authorization policy.

For gRPC services running on Google Cloud compute platforms such as GCE and GKE, ALTS is available as a drop-in replacement for TLS:

// Go: ALTS credentials (Google Cloud)
import "google.golang.org/grpc/credentials/alts"

altsTC := alts.NewClientCreds(alts.DefaultClientOptions())
conn, _ := grpc.Dial(addr, grpc.WithTransportCredentials(altsTC))

// Server side
altsTC := alts.NewServerCreds(alts.DefaultServerOptions())
server := grpc.NewServer(grpc.Creds(altsTC))

After the ALTS handshake, the server can extract the peer's service account identity from the context and use it for authorization. This eliminates the need for separate token-based authentication for internal services — the transport layer provides identity directly.

ALTS is only available within Google's infrastructure. Outside Google Cloud, the equivalent functionality is typically achieved through mTLS with a service mesh or SPIFFE/SPIRE for workload identity.

Per-RPC Authorization

Authentication tells you who the caller is. Authorization tells you what they are allowed to do. In gRPC, authorization is typically implemented in interceptors that run after authentication and before the RPC handler.

A robust authorization model for gRPC considers the full method name being called (info.FullMethod), the caller's identity and claims established by the authentication interceptor, and — for fine-grained policies — the contents of the request itself, such as whether the caller owns the resource being accessed:

// Go: per-RPC authorization interceptor
func authzInterceptor(
    ctx context.Context,
    req interface{},
    info *grpc.UnaryServerInfo,
    handler grpc.UnaryHandler,
) (interface{}, error) {
    claims := extractClaims(ctx) // from auth interceptor
    if !isAuthorized(claims, info.FullMethod, req) {
        return nil, status.Errorf(
            codes.PermissionDenied,
            "user %s not authorized for %s", claims.Subject, info.FullMethod,
        )
    }
    return handler(ctx, req)
}

For complex authorization policies, consider using a policy engine like Open Policy Agent (OPA) or Google's CEL (Common Expression Language). These let you express authorization rules declaratively rather than hardcoding them in interceptors:

# OPA policy for gRPC authorization (Rego)
package grpc.authz

default allow = false

allow {
    input.method == "/mypackage.MyService/GetResource"
    input.claims.role == "admin"
}

allow {
    input.method == "/mypackage.MyService/GetResource"
    input.claims.role == "viewer"
    input.request.owner == input.claims.subject
}

The critical principle: always use codes.PermissionDenied for authorization failures and codes.Unauthenticated for authentication failures. Mixing these up leaks information — a PermissionDenied response confirms the resource exists, while Unauthenticated simply says "provide credentials."

Interceptor-Based Auth Patterns

Interceptors (called middleware in some frameworks) are the primary mechanism for implementing cross-cutting security concerns in gRPC. They sit in the RPC processing pipeline, executing before and after the handler. gRPC defines both unary interceptors (for request-response RPCs) and stream interceptors (for streaming RPCs).

A well-structured interceptor chain for security typically looks like this:

[Diagram: the server-side interceptor chain. Rate limiter (rejects excess with RESOURCE_EXHAUSTED) → authentication (UNAUTHENTICATED on bad tokens) → authorization (PERMISSION_DENIED on policy failure) → audit log → RPC handler. Each interceptor can short-circuit the chain with a specific error code.]

The order matters. Rate limiting should come first — you do not want to spend CPU verifying JWTs for requests you are going to reject anyway. Authentication comes next, then authorization, then audit logging (which needs the authenticated identity to log meaningfully), and finally the handler.

Server-side interceptor registration in Go chains them in order:

server := grpc.NewServer(
    grpc.ChainUnaryInterceptor(
        rateLimitInterceptor,
        authenticationInterceptor,
        authorizationInterceptor,
        auditLogInterceptor,
    ),
    grpc.ChainStreamInterceptor(
        streamRateLimitInterceptor,
        streamAuthenticationInterceptor,
        streamAuthorizationInterceptor,
        streamAuditLogInterceptor,
    ),
)

A common mistake is implementing only unary interceptors and forgetting stream interceptors. Streaming RPCs bypass unary interceptors entirely — if your auth logic only runs in a unary interceptor, streaming endpoints are wide open. Always implement both.

gRPC and API Keys

API keys are the simplest form of call credential — a static string that identifies the calling application (not the user). API keys are appropriate for public APIs where you need to track usage and enforce rate limits, but not for strong authentication.

In gRPC, API keys are sent as metadata:

// Client: attach API key
type apiKeyCredential struct {
    key string
}

func (a apiKeyCredential) GetRequestMetadata(ctx context.Context, uri ...string) (map[string]string, error) {
    return map[string]string{"x-api-key": a.key}, nil
}

func (a apiKeyCredential) RequireTransportSecurity() bool {
    return true // Always require TLS
}

conn, _ := grpc.Dial(addr,
    grpc.WithTransportCredentials(tlsCreds),
    grpc.WithPerRPCCredentials(apiKeyCredential{key: "dk_live_..."}),
)

Notice RequireTransportSecurity() returns true. This is critical — API keys sent over an unencrypted channel can be intercepted. gRPC enforces this check: if the credential requires transport security and the channel is not TLS, the RPC will fail. This prevents accidental deployment of insecure configurations.

API keys should never be the sole authentication mechanism for operations that modify data or access sensitive resources. They are best used alongside stronger authentication — the API key identifies the application, while a JWT or OAuth2 token identifies the user.

Rate Limiting

Rate limiting in gRPC can be implemented at multiple levels: in the interceptor chain, at the load balancer, or in a sidecar proxy. Interceptor-based rate limiting is the most common for application-level control.

Effective rate limiting for gRPC considers the caller's identity (per-client limits, not just per-IP), the method being invoked (an expensive search RPC warrants a tighter limit than a cheap read), and the shape of the traffic (long-lived streams consume resources differently from unary calls):

// Go: token bucket rate limiter interceptor
func rateLimitInterceptor(
    ctx context.Context,
    req interface{},
    info *grpc.UnaryServerInfo,
    handler grpc.UnaryHandler,
) (interface{}, error) {
    clientID := extractClientID(ctx)
    limiter := getLimiter(clientID, info.FullMethod)
    if !limiter.Allow() {
        return nil, status.Errorf(
            codes.ResourceExhausted,
            "rate limit exceeded for %s", info.FullMethod,
        )
    }
    return handler(ctx, req)
}

Use codes.ResourceExhausted for rate limiting — this is the gRPC equivalent of HTTP 429. Well-behaved clients will back off when they receive this status. You can also return retry timing hints, for example via a google.rpc.RetryInfo detail attached to the status, though support for honoring such hints varies across gRPC client libraries.

For distributed rate limiting across multiple server instances, use a shared backend like Redis with a sliding window or token bucket algorithm. Each server instance checks the shared counter before allowing a request through.

Audit Logging

Audit logging captures who did what, when, and whether it succeeded. For gRPC services handling sensitive data, audit logging is not optional — it is a compliance requirement for SOC 2, HIPAA, PCI DSS, and most other security frameworks.

An audit log interceptor should capture the timestamp, the authenticated caller, the full method name, the peer address, the resulting status code, and the call duration:

// Go: audit log interceptor
func auditInterceptor(
    ctx context.Context,
    req interface{},
    info *grpc.UnaryServerInfo,
    handler grpc.UnaryHandler,
) (interface{}, error) {
    start := time.Now()
    claims := extractClaims(ctx)
    peer, _ := peer.FromContext(ctx)

    resp, err := handler(ctx, req)

    code := status.Code(err)
    auditLog.Write(AuditEntry{
        Timestamp: start,
        Caller:    claims.Subject,
        Method:    info.FullMethod,
        PeerAddr:  peer.Addr.String(),
        Status:    code.String(),
        Duration:  time.Since(start),
    })
    return resp, err
}

Critical principle: the audit log interceptor should never fail the RPC. If the audit system is down, the RPC should still proceed (and the failure to log should be recorded through a separate alerting channel). Never log the full request or response payload in the audit log — this can leak sensitive data and create massive storage costs. Log only the metadata needed for forensics.

Common Security Pitfalls

These are the mistakes that show up repeatedly in gRPC security audits and incident reports. Each one has caused real production incidents.

1. Reflection Enabled in Production

gRPC server reflection is a protocol that allows clients to discover available services and their method signatures at runtime. It is invaluable during development — tools like grpcurl and grpcui depend on it. But leaving reflection enabled in production exposes your entire API surface to anyone who can reach the server.

// DO NOT do this in production
import "google.golang.org/grpc/reflection"
reflection.Register(server) // exposes all service definitions

An attacker with reflection access can enumerate every service, every method, and every message type on your server. This is the gRPC equivalent of leaving Swagger UI publicly accessible with no authentication. Disable reflection in production, or gate it behind authentication that only internal tooling can pass.

2. Insecure Channels in Production

Using grpc.WithInsecure() (now grpc.WithTransportCredentials(insecure.NewCredentials())) disables TLS entirely. Traffic flows in plaintext, and any network observer — on the same WiFi, at the ISP, or anywhere along the path — can read and modify every RPC.

// NEVER in production
conn, _ := grpc.Dial(addr, grpc.WithInsecure()) // plaintext, no auth

This seems obvious, but it happens more often than you would expect. A developer uses WithInsecure() for local testing, the code gets committed, and it ends up in production behind a load balancer that terminates TLS — which means the last hop (LB to application) is plaintext. If the load balancer is not on the same machine, that plaintext hop crosses a network.

Even for internal services, use TLS or mTLS. The "trusted internal network" assumption has been invalidated by every major breach of the last decade. Zero-trust networking means encrypting everything, even east-west traffic.

3. Missing Deadline Enforcement

gRPC deadlines (timeouts) are a security mechanism, not just a reliability feature. Without deadlines, a slow or malicious client can hold server resources indefinitely — connections, goroutines, memory — eventually exhausting the server.

// Client: always set deadlines
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
resp, err := client.GetResource(ctx, req)

// Server: reject requests without deadlines
func (s *server) GetResource(ctx context.Context, req *pb.Request) (*pb.Response, error) {
    _, ok := ctx.Deadline()
    if !ok {
        return nil, status.Error(codes.InvalidArgument,
            "deadline required")
    }
    // ... handle request
}

On the server side, consider rejecting RPCs that arrive without a deadline, or enforcing a maximum deadline. A client that sets a 24-hour deadline is effectively the same as no deadline for resource exhaustion purposes.

4. Missing Stream Limits

gRPC supports multiplexing many RPCs over a single HTTP/2 connection. Without limits, a client can open thousands of concurrent streams on a single connection, consuming server memory and goroutines. Set MaxConcurrentStreams on the server:

server := grpc.NewServer(
    grpc.MaxConcurrentStreams(100),
    grpc.KeepaliveEnforcementPolicy(keepalive.EnforcementPolicy{
        MinTime:             10 * time.Second,
        PermitWithoutStream: false,
    }),
)

5. Overly Broad Error Messages

gRPC status messages are returned to the client. Including internal details — stack traces, database errors, file paths, internal IP addresses — in error messages leaks information to attackers. Return generic messages to the client and log the detailed error server-side:

// Bad: leaks internal details
return nil, status.Errorf(codes.Internal,
    "query failed: pq: relation 'users' does not exist at 10.0.3.42:5432")

// Good: generic message, detailed server-side log
log.Errorf("query failed: %v (peer: %s)", err, peerAddr)
return nil, status.Error(codes.Internal, "internal error")

6. No Input Validation

Protobuf deserialization is not validation. A message can be perfectly valid protobuf but contain values that are logically invalid — negative IDs, empty required fields, strings that exceed expected lengths, or values that trigger expensive operations. Always validate request fields before processing:

func (s *server) GetUser(ctx context.Context, req *pb.GetUserRequest) (*pb.User, error) {
    if req.UserId == "" {
        return nil, status.Error(codes.InvalidArgument, "user_id required")
    }
    if len(req.UserId) > 128 {
        return nil, status.Error(codes.InvalidArgument, "user_id too long")
    }
    // ...
}

Consider using a validation library like protoc-gen-validate (PGV) or buf validate to generate validation code from protobuf annotations, rather than writing validation logic by hand for every message.

7. Ignoring Metadata Size Limits

gRPC metadata (headers) can carry arbitrary key-value pairs. Without size limits, a client can send megabytes of metadata, consuming server memory before the request even reaches your handler. Set MaxHeaderListSize to enforce a reasonable limit:

server := grpc.NewServer(
    grpc.MaxRecvMsgSize(4 * 1024 * 1024),    // 4MB max message
    grpc.MaxHeaderListSize(8 * 1024),          // 8KB max metadata
)

gRPC Security Checklist

DO:
  - Use TLS 1.3 (or mTLS)
  - Set deadlines on every RPC
  - Validate all input fields
  - Use asymmetric JWT algorithms (RS256/ES256)
  - Implement both unary and stream auth interceptors
  - Limit MaxConcurrentStreams
  - Limit metadata and message sizes
  - Return generic error messages
  - Log an audit trail for writes
  - Rate limit before auth (save CPU)
  - Have RequireTransportSecurity() return true
  - Rotate certificates and keys regularly

DO NOT:
  - Enable reflection in production
  - Use WithInsecure() in production
  - Skip deadlines/timeouts
  - Use HS256 shared secrets
  - Authenticate only unary RPCs and skip streams
  - Allow unlimited streams
  - Accept unbounded metadata
  - Expose stack traces in errors
  - Log full request payloads
  - Authenticate after expensive processing
  - Trust the "internal network" alone
  - Hardcode API keys in source

Defense in Depth: Layering Security

No single mechanism is sufficient. A production gRPC deployment should layer multiple defenses:

  1. Network level — Firewall rules, VPC isolation, private endpoints. Your gRPC server should not be reachable from the public internet unless it is explicitly a public API. Use network policies (Kubernetes NetworkPolicy, cloud security groups) to restrict which services can reach which endpoints.
  2. Transport level — TLS 1.3 at minimum. mTLS for service-to-service traffic. ALTS if running on Google Cloud. Never allow downgrade to plaintext.
  3. Authentication level — JWT or OAuth2 tokens for user-facing APIs. mTLS client certificates or ALTS identities for service-to-service. API keys for application identification (not sole authentication).
  4. Authorization level — Per-RPC access control based on the caller's identity and the resource being accessed. Use a policy engine for complex rules. Deny by default — explicitly allow rather than explicitly deny.
  5. Application level — Input validation, rate limiting, resource quotas, deadline enforcement. These protect against abuse even from authenticated, authorized clients.

Each layer catches threats that slip through the others. Network controls might be misconfigured. Tokens might be stolen. Authorization policies might have gaps. Rate limits catch runaway automation. Deadlines prevent resource exhaustion. When one layer fails, the others hold.

gRPC Security in Service Meshes

Service meshes like Istio, Linkerd, and Consul Connect shift much of the security burden from application code to infrastructure. The mesh sidecar proxy handles mTLS, certificate rotation, and even authorization policies — the application only needs to speak plaintext gRPC to localhost.

This has significant advantages: developers do not need to understand TLS certificate management, the mesh enforces consistent security policies across all services, and certificate rotation happens automatically without application restarts. The trade-off is operational complexity in running the mesh itself, and latency overhead from the extra proxy hop.

However, a service mesh does not eliminate the need for application-level security. The mesh handles transport security (mTLS) and can enforce coarse-grained authorization (which service can call which service), but it cannot enforce fine-grained authorization that depends on the request payload or the user's identity within a JWT. Application-level interceptors remain necessary for these concerns.

Putting It Together

A complete gRPC server setup combining TLS, authentication, authorization, rate limiting, and audit logging:

func main() {
    // TLS configuration
    cert, _ := tls.LoadX509KeyPair("server.crt", "server.key")
    caCert, _ := os.ReadFile("ca.crt")
    caPool := x509.NewCertPool()
    caPool.AppendCertsFromPEM(caCert)

    tlsConfig := &tls.Config{
        Certificates: []tls.Certificate{cert},
        ClientAuth:   tls.RequireAndVerifyClientCert,
        ClientCAs:    caPool,
        MinVersion:   tls.VersionTLS13,
    }

    server := grpc.NewServer(
        grpc.Creds(credentials.NewTLS(tlsConfig)),
        grpc.MaxConcurrentStreams(100),
        grpc.MaxRecvMsgSize(4 << 20),
        grpc.MaxHeaderListSize(8192),
        grpc.KeepaliveEnforcementPolicy(keepalive.EnforcementPolicy{
            MinTime:             10 * time.Second,
            PermitWithoutStream: false,
        }),
        grpc.ChainUnaryInterceptor(
            rateLimitInterceptor,
            jwtAuthInterceptor,
            rbacAuthzInterceptor,
            auditLogInterceptor,
        ),
        grpc.ChainStreamInterceptor(
            streamRateLimitInterceptor,
            streamJwtAuthInterceptor,
            streamRbacAuthzInterceptor,
            streamAuditLogInterceptor,
        ),
    )

    pb.RegisterMyServiceServer(server, &myServiceImpl{})
    // Note: NO reflection.Register() in production

    lis, _ := net.Listen("tcp", ":443")
    server.Serve(lis)
}

This configuration provides: encrypted and mutually authenticated transport (mTLS), per-RPC authentication (JWT), role-based authorization, rate limiting, audit logging, resource limits (streams, message size, metadata size, keepalive), and no reflection endpoint. Each layer is independent and can be tested in isolation.

gRPC gives you the building blocks. Channel credentials, call credentials, interceptors, status codes, and metadata form a composable security model. The key is using all of them together — not just the easy ones — and testing the failure modes: What happens when a token is expired? When a certificate is revoked? When a client sends no deadline? When metadata exceeds the size limit? The answers to these questions determine whether your gRPC service is secure or merely appears to be.

For a deeper understanding of the underlying protocols, see how gRPC works, how TLS secures connections, how OAuth2 handles authorization, and how JWTs encode identity claims.
