Java 26: What Actually Matters and Why You Should Care

Java 26 landed on 17 March 2026, right on schedule as Oracle’s relentless six-month cadence demands. As the first non-LTS release since Java 25, it ships ten JDK Enhancement Proposals, or JEPs. A JEP is the formal unit of change in the Java platform: a numbered design document authored by engineers from Oracle, the broader OpenJDK community, or other contributors, describing a specific improvement to the JDK along with its motivation, design decisions, and risks. Think of it as the equivalent of a Request for Comments in internet standards or a Python Enhancement Proposal in that ecosystem. Each JEP is reviewed publicly, refined through community feedback, and eventually either targeted to a specific release or deferred. Some features ship as previews, meaning the implementation is complete and available to use but the design may still change based on developer feedback before it is finalised in a future version. Others ship as incubating APIs, a status reserved for entirely new modules that need broader real-world usage before their design is locked in. Five of the ten JEPs in Java 26 are still in preview or incubation, which tells you something important about the philosophy behind this release. Java 26 is not trying to be flashy. It is laying groundwork, hardening foundations, and preparing the platform for what comes next, most likely the first real fruits of Project Valhalla in Java 27.

That said, there is genuine substance here, particularly if you care about performance, AI workloads, or the long overdue maturation of Java’s concurrency and pattern matching stories. Let me take you through what matters most.

1. Ahead-of-Time Object Caching Now Works With Any Garbage Collector

To understand JEP 516, you need to understand two concepts. The first is the JVM warmup problem. When a Java application starts, the HotSpot virtual machine begins interpreting bytecode slowly and then progressively compiles hot code paths into native machine code. This means Java applications are deliberately slow for the first seconds or minutes of their life, only reaching full performance once the JIT compiler, or Just-In-Time compiler, has had time to observe real execution patterns and optimise accordingly. The second concept is ahead-of-time caching, which attacks this problem by allowing the JVM to serialise the state of the heap, compiled code, and class metadata to disk before the application runs. On the next startup, the JVM restores this pre-built state rather than reconstructing everything from scratch.

JEP 516 is significant because that AOT cache was previously constrained to specific garbage collectors, limiting its usefulness in production environments where teams have already committed to a particular GC based on their latency and throughput requirements. Java 26 removes that constraint entirely. You can now use AOT object caching with any GC, including the collectors you actually want to use in production.
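To make the workflow concrete, here is a rough sketch of the train-then-cache cycle using the AOT flags introduced by Project Leyden's earlier JEPs (the flag names follow the JDK 24 era documentation and may evolve; `app.jar` and the choice of ZGC are placeholders for your own application and collector):

```shell
# 1. Training run: record which classes and objects the app actually loads.
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -jar app.jar

# 2. Create the AOT cache from the recorded configuration.
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -jar app.jar

# 3. Production run: restore the cached state. On Java 26 the cache no longer
#    dictates your collector, so pairing it with ZGC (or any other GC) is fine.
java -XX:AOTCache=app.aot -XX:+UseZGC -jar app.jar
```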

This matters enormously for cloud native workloads. Startup latency has historically been Java’s Achilles heel compared to runtimes like Go or even Python in certain scenarios. Combine AOT caching with the ongoing work from Project Leyden, the broader OpenJDK project focused on making Java start faster and use less memory, and you start to see a credible path toward Java startup times that no longer embarrass the language in container environments. Oracle has also explicitly positioned this as beneficial for AI workloads, where framework initialisation overhead can be substantial.

2. Lazy Constants: A Smarter Way to Think About Immutability

In Java, a field declared final is a constant: it is set once during object construction and never changed again. The JVM exploits this promise aggressively, applying an optimisation called constant folding where expressions involving constants are evaluated at compile time rather than at runtime. The catch is that a traditional final field must be initialised at class load time, which can mean paying an expensive setup cost even if that value is never actually used in a given execution.

JEP 536 brings a second preview of lazy constants, a feature that was called stable values in Java 25. The rename is deliberate and the scope has been refined. A lazy constant is a java.lang.LazyConstant object that holds a single immutable value, initialised exactly once on first access rather than at class load time, and never changed again. Critically, the JVM still treats it as a true constant for optimisation purposes, applying the same constant folding it would apply to a field declared final.

This gives you the performance profile of a static final with the initialisation flexibility of a lazy pattern. For large frameworks, AI inference pipelines, or anything with heavyweight initialisation that may not always be exercised in a given request path, this is a meaningful improvement in both startup performance and memory efficiency. The lazy constant model also supports efficient data sharing across threads without the synchronisation overhead you would normally associate with deferred initialisation patterns like double-checked locking.
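For contrast, here is the kind of boilerplate lazy constants aim to retire: a double-checked-locking holder in plain Java. It is correct, but verbose, and opaque to the JVM's constant folding. The commented `LazyConstant` form at the bottom is a sketch of the preview API; the exact factory method name is an assumption, so check the JEP 536 javadoc.

```java
public class LazyHolder {
    // Classic double-checked locking: defer an expensive initialisation
    // until first use, with a volatile field and a synchronized block.
    private static volatile Settings settings;

    static Settings settings() {
        Settings s = settings;
        if (s == null) {
            synchronized (LazyHolder.class) {
                s = settings;
                if (s == null) {
                    settings = s = loadSettings();   // expensive, runs once
                }
            }
        }
        return s;
    }

    static Settings loadSettings() { return new Settings(); }

    static class Settings {}

    // On Java 26 (preview), the machinery above collapses to roughly:
    //   static final LazyConstant<Settings> SETTINGS =
    //       LazyConstant.of(LazyHolder::loadSettings);
    // and the JVM can constant-fold reads of SETTINGS once it is set.
}
```

The hand-rolled version works, but the JVM cannot treat `settings` as a constant because it is an ordinary mutable field; the lazy constant gets both the deferral and the optimisation.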

3. HTTP/3 for the Java HTTP Client

HTTP is the protocol that underpins virtually all modern service communication. HTTP/1.1 and HTTP/2 are built on TCP, a transport protocol that provides reliability through ordered delivery but suffers from a problem called head-of-line blocking: if one packet in a stream is lost, all subsequent data in that stream is held up waiting for the retransmission, even if the later packets have already arrived. HTTP/3 replaces TCP with QUIC, a transport protocol built on UDP that handles each logical stream independently. A dropped packet affects only its own stream, not everything else in flight simultaneously.

JEP 530 adds HTTP/3 support to the Java HTTP Client API that was standardised back in Java 11. Beyond eliminating head-of-line blocking, QUIC also provides faster connection establishment through zero round-trip time resumption, meaning a client reconnecting to a server it has spoken to before can send application data in the very first packet rather than completing a full handshake first.
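Because the support sits behind the existing API, client code barely changes. A minimal sketch, runnable on Java 11+ as written (the `HTTP_3` version constant is new in Java 26, so it appears here as a comment rather than live code; the URL is a placeholder):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.time.Duration;

public class Http3Sketch {
    static HttpClient client() {
        return HttpClient.newBuilder()
                // .version(HttpClient.Version.HTTP_3)   // Java 26+: opt in to QUIC
                .connectTimeout(Duration.ofSeconds(5))
                .build();
    }

    static HttpRequest request() {
        return HttpRequest.newBuilder(URI.create("https://example.com/api"))
                .GET()
                .build();
    }

    // usage: client().send(request(), HttpResponse.BodyHandlers.ofString())
}
```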

For any Java service that acts as an HTTP client, particularly microservices calling downstream APIs or banking systems integrating with third party providers, HTTP/3 opens up meaningful latency improvements in high throughput scenarios and on unreliable network paths. The implementation sits behind the existing HttpClient API, so migration is largely transparent once your infrastructure supports QUIC. This one has been a long time coming and its arrival in Java 26 as a finalised feature rather than a preview is welcome.

4. Primitive Types in Patterns: Fourth Preview and Closing In

Pattern matching is a way of testing a value against a shape and extracting parts of it in a single expression. Rather than writing a chain of if statements that check a type, cast it, and then access its fields, pattern matching lets you express the type check, cast, and binding in one step. Java has been extending its pattern matching capabilities progressively since Java 16, starting with instanceof patterns and expanding through switch expressions. One limitation has persisted throughout: patterns worked with reference types, the objects that live on the heap, but not with primitive types like int, long, float, and double, the raw numeric values that live directly on the stack.

This distinction matters for performance because of boxing. Boxing is the automatic wrapping of a primitive value in a heap-allocated object, converting an int into an Integer for example, to make it compatible with APIs that expect reference types. Boxing has a real cost: it allocates memory, generates garbage, and puts pressure on the garbage collector. In numeric-heavy workloads this overhead compounds invisibly across millions of operations.
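The cost is easy to demonstrate. Boxing the same `int` twice produces two distinct heap objects once you leave the small `Integer` cache (values from -128 to 127), which is why reference comparison on boxed values is a classic bug:

```java
public class BoxingCost {
    // Autoboxing allocates: values outside the Integer cache get a
    // fresh heap object on every boxing conversion.
    static boolean sameObject(int v) {
        Integer a = v;   // boxing
        Integer b = v;   // boxing again
        return a == b;   // reference comparison, not value comparison
    }
}
```

`sameObject(100)` returns true only because of the cache; `sameObject(10_000)` returns false, and each call has allocated two objects the garbage collector must eventually sweep up.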

JEP 547 continues the progression of primitive type support in pattern matching, now in its fourth preview after appearing in Java 23, 24, and 25. When you can pattern match against primitives without boxing, you eliminate an entire category of silent performance degradation. AI and data processing workloads that operate over large arrays of numeric data benefit directly. The feature also removes a class of unsafe casts that developers have historically worked around with verbose conditional logic. Four previews in, the semantics appear stable and finalisation in Java 27 looks likely.
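Here is what the progression looks like in practice. The reference-type form below is final since Java 21 and runs as written; the primitive form in the trailing comment is a sketch of the JEP 547 preview syntax:

```java
public class PatternSketch {
    // Reference-type patterns: test, cast, and bind in one step.
    // Note the int arrives boxed as an Integer.
    static String describe(Object value) {
        return switch (value) {
            case Integer i when i > 100 -> "big int " + i;
            case Integer i -> "int " + i;
            case String s  -> "string " + s;
            default        -> "something else";
        };
    }

    // With JEP 547 (preview), patterns extend to primitives, roughly:
    //   switch (anInt) {
    //       case byte b -> ...   // matches only if the value fits in a byte
    //       case 0      -> ...
    //       default     -> ...
    //   }
    // No boxing, and the lossy-cast check is part of the match itself.
}
```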

5. G1 Garbage Collector Gets Faster Synchronisation

The garbage collector is the component of the JVM responsible for automatically reclaiming memory that is no longer in use. Java developers do not free memory manually, the GC does it for them, but this comes at a cost: the collector must occasionally pause application threads or compete with them for access to shared data structures. G1, which stands for Garbage First, is the default garbage collector for most production Java workloads. It is designed to balance throughput and latency by dividing the heap into regions and collecting the regions most likely to contain garbage first, hence the name.

JEP 529 targets something unsexy but impactful: reducing lock contention in G1 by improving how it handles Java object monitor operations. Object monitors are the mechanism behind Java’s synchronized keyword, the primitive used to protect shared state in concurrent code from being corrupted by simultaneous access from multiple threads. When many threads compete for monitors simultaneously, the GC’s internal handling of those monitors can itself become a bottleneck, adding latency that has nothing to do with application logic.
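For readers who have not thought about monitors in a while, a minimal illustration of what they protect. Every Java object carries a monitor, and `synchronized` acquires it; it is the JVM's internal bookkeeping for these monitors that JEP 529 makes cheaper for G1:

```java
public class MonitorDemo {
    private int count;

    // synchronized acquires this object's monitor, serialising access.
    synchronized void increment() { count++; }

    static int run() {
        MonitorDemo demo = new MonitorDemo();
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) demo.increment();
            });
            threads[i].start();
        }
        try {
            for (Thread t : threads) t.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return demo.count;   // 80_000: the monitor kept the count consistent
    }
}
```

Eight threads contending for one monitor is exactly the shape of workload where collector-side monitor handling can become measurable overhead.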

The work here focuses on making the GC less intrusive during the periods when your application is doing real work. Reduced synchronisation in the collector means better throughput in applications with high thread counts, which describes most modern microservice workloads. A separate Java 26 improvement complements this by letting G1 reclaim humongous objects, meaning very large objects that span more than half a heap region, without waiting for a full heap marking cycle. Together these changes mean the GC story continues to improve meaningfully with each release.

6. PEM Encodings of Cryptographic Objects

PEM, which stands for Privacy-Enhanced Mail, is the most common format for storing and exchanging cryptographic objects such as TLS certificates, private keys, and certificate chains. Despite the email-era name, PEM is how virtually all TLS certificates are distributed today. A PEM file is a Base64-encoded block of binary cryptographic data wrapped in human-readable header and footer lines like -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----. If you have ever configured HTTPS on a web server, read a certificate export from AWS Certificate Manager, or debugged a mutual TLS handshake between services, you have worked with PEM files.

Java has historically provided poor native support for reading and writing PEM data. The platform’s cryptographic APIs, built around the KeyStore and CertificateFactory classes, predate the universal adoption of PEM and were designed around different abstractions. In practice, most Java developers either write their own PEM parsing utilities, pull in the third-party Bouncy Castle library, or navigate deeply convoluted conversion chains to get a PEM file into a form the standard APIs can consume.
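The hand-rolled utilities in question tend to look like this: strip the armour lines, squash the whitespace, and Base64-decode whatever is left. A minimal sketch of the pattern JEP 538 aims to retire (many real versions also have to dispatch on the label and feed the bytes into a `KeyFactory` or `CertificateFactory`):

```java
import java.util.Base64;

public class PemStrip {
    // Remove the -----BEGIN/END----- armour and decode the Base64 payload.
    static byte[] decode(String pem) {
        String base64 = pem
                .replaceAll("-----BEGIN [A-Z ]+-----", "")
                .replaceAll("-----END [A-Z ]+-----", "")
                .replaceAll("\\s", "");
        return Base64.getDecoder().decode(base64);
    }
}
```

It works until it meets explanatory text above the header, encrypted private keys, or concatenated certificate chains, at which point every codebase grows its own slightly different set of special cases.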

JEP 538 brings a second preview of a proper PEM API for Java. It provides a clean, type-safe way to work with PEM data across all the standard cryptographic types. Java 26 refines the preview from Java 25 with several changes: the PEM API moves from a record to a class, a new CryptoException type is introduced for runtime cryptographic errors, and the DEREncodable interface is renamed to BinaryEncodable to more accurately describe what it represents. This is the kind of developer experience improvement that sounds minor until you spend an afternoon debugging a certificate chain loading issue and wish the platform had given you clearer abstractions ten years ago.

7. The Vector API: Eleven Incubations and Still Not Done

Modern CPUs contain specialised hardware that can apply the same operation to multiple data values simultaneously, a technique called SIMD, or Single Instruction Multiple Data. Instead of adding two arrays of numbers element by element in a loop, SIMD instructions can process eight or sixteen pairs of numbers in a single clock cycle. This kind of instruction-level parallelism is foundational to high performance numeric computing, including the matrix multiplications that dominate AI inference and training workloads.

The Java Vector API provides a way to write Java code that explicitly uses these SIMD instructions, expressing vector computations that the JIT compiler maps directly to hardware vector instructions on x64 and AArch64 processors. Before this API, developers who needed SIMD-level performance in Java had to use JNI, the Java Native Interface, to call native C or C++ code, accept whatever auto-vectorisation the JIT compiler happened to produce, or move to a different language entirely for performance-critical paths.
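To see the difference in shape, here is a scalar element-wise multiply that runs anywhere, with the Vector API equivalent sketched in the trailing comment (the incubator form needs `--add-modules jdk.incubator.vector` and is shown from memory of the API, so treat it as illustrative):

```java
public class VectorSketch {
    // Scalar baseline: one lane per loop iteration.
    static float[] mulScalar(float[] a, float[] b) {
        float[] out = new float[a.length];
        for (int i = 0; i < a.length; i++) out[i] = a[i] * b[i];
        return out;
    }

    // Vector API form: one full hardware register's worth of lanes per step.
    //   static final VectorSpecies<Float> S = FloatVector.SPECIES_PREFERRED;
    //   for (int i = 0; i < S.loopBound(a.length); i += S.length()) {
    //       FloatVector va = FloatVector.fromArray(S, a, i);
    //       FloatVector vb = FloatVector.fromArray(S, b, i);
    //       va.mul(vb).intoArray(out, i);
    //   }
    //   // ...plus a scalar tail loop for the leftover elements
}
```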

JEP 534 marks the eleventh incubation of the Vector API, which began back in Java 16. That number is uncomfortable. An API that has been incubating for five years across eleven releases is caught in a genuine dependency on Project Valhalla’s value types, which would allow vectors to be represented as true value objects rather than heap-allocated approximations. Until Valhalla lands, finalising the Vector API would lock in design decisions that value types could improve. Oracle explicitly cites the Vector API as one of Java’s five AI-relevant features in this release. The long-term plan is sound. The timeline is frustrating.

8. Structured Concurrency Reaches Its Sixth Preview

Concurrency, the ability to do multiple things at the same time, has always been one of Java’s strengths and one of its most dangerous surfaces. Traditional thread-based concurrency in Java is unstructured: you can spawn a thread at any point, from any context, and that thread’s lifetime is completely independent of the code that created it. This makes it easy to write code where a parent operation completes or fails while child threads continue running invisibly in the background, holding resources, writing to shared state, and generating errors that nobody is listening for. These bugs are notoriously difficult to reproduce, diagnose, and fix.

Structured concurrency, proposed by JEP 535 in its sixth preview, introduces a different model borrowed from structured programming principles. A structured concurrency scope is a block of code that treats a group of related tasks as a single unit of work. When the scope exits, whether normally or through an exception, all spawned subtasks are either complete or cancelled. The lifetime of child tasks is always bounded by the lifetime of their parent scope. This makes concurrent code easier to reason about, easier to debug, and significantly harder to leak threads or resources from.
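The contrast with today's future-juggling is instructive. Below, the runnable part is the unstructured status quo with a plain `ExecutorService`; the commented block sketches the structured equivalent following the shape of the recent previews (method names there may still change before finalisation):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ForkJoinTwo {
    // Today: futures whose lifetimes are not tied to this method. If fetchA
    // fails, fetchB keeps running unless we remember to shut the pool down.
    static String combine() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Future<String> a = pool.submit(ForkJoinTwo::fetchA);
            Future<String> b = pool.submit(ForkJoinTwo::fetchB);
            return a.get() + "+" + b.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();   // easy to forget: the leak JEP 535 prevents
        }
    }

    static String fetchA() { return "a"; }
    static String fetchB() { return "b"; }

    // With structured concurrency (preview), the scope owns its subtasks:
    //   try (var scope = StructuredTaskScope.open()) {
    //       var a = scope.fork(ForkJoinTwo::fetchA);
    //       var b = scope.fork(ForkJoinTwo::fetchB);
    //       scope.join();
    //       return a.get() + "+" + b.get();
    //   }   // leaving the block guarantees both subtasks finished or were cancelled
}
```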

The feature builds naturally on virtual threads, the lightweight thread implementation that became a stable feature in Java 21. Virtual threads allow you to write blocking code in a simple sequential style without the performance cost of pinning an OS thread per request. Structured concurrency takes that a step further, giving you a principled way to compose groups of virtual threads into coherent operations with clear lifecycle semantics. For complex service orchestration, which is the daily reality of any large banking platform, this combination represents a substantial improvement over raw thread management or reactive programming chains.

9. Final Field Mutation Warnings: Preparing for a More Honest Language

In Java, a field declared final is supposed to be immutable. Once set in a constructor or static initialiser, it cannot be reassigned. The JVM relies on this guarantee to apply important optimisations, and concurrent code relies on it for correctness: the Java memory model provides specific visibility guarantees for final fields that it does not provide for ordinary fields. The word final is supposed to mean something.

It does not always mean what it says. Java’s reflection API, which allows code to inspect and manipulate classes and their members at runtime, includes a method called setAccessible(true) that can bypass access controls including finality. The sun.misc.Unsafe API, a low-level backdoor that many popular frameworks and serialisation libraries have depended on for years, allows the same. This capability has been used extensively in dependency injection frameworks, serialisation libraries, and mocking frameworks where it is genuinely convenient but semantically incorrect.
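The escape hatch is easy to demonstrate. The following compiles and runs on current JDKs for an instance final field in an ordinary class (records and static finals are already protected); on Java 26 this is precisely the operation that starts emitting a warning:

```java
import java.lang.reflect.Field;

public class FinalBend {
    static class Config {
        final int timeout;   // "final" -- supposedly set once, forever
        Config(int timeout) { this.timeout = timeout; }
    }

    // Deep reflection bypassing finality: the pattern JEP 500 warns about.
    static int mutateAndRead(Config c, int v) {
        try {
            Field f = Config.class.getDeclaredField("timeout");
            f.setAccessible(true);   // overrides access checks, including finality
            f.setInt(c, v);
            return c.timeout;        // reads back the "impossible" new value
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Any optimisation or memory-model guarantee that trusted `timeout` to be 30 forever is now wrong, which is exactly why the platform wants this behaviour gone.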

JEP 500 introduces warnings for uses of deep reflection to mutate final fields. This is a deprecation-style signal to the ecosystem that a future Java version will restrict this behaviour by default. The warnings in Java 26 give library authors and framework maintainers time to remove their reliance on this pattern before it becomes an error. Making final actually mean final is also a prerequisite for some of what Project Valhalla needs to deliver: value classes, which have no identity and are purely defined by their field values, cannot be safely optimised if the JVM cannot trust that those fields are genuinely immutable.

10. Applet API: Finally Gone

Java applets were a technology from the mid-1990s that allowed Java programs to run inside web browsers through a browser plugin. They were a genuinely interesting idea at the time, promising write-once-run-anywhere interactive web content in an era before JavaScript matured as a platform. In practice, applets were slow to load, required users to install and maintain the Java browser plugin separately, became a persistent source of security vulnerabilities, and were rendered obsolete by the combination of Flash and later the modern JavaScript ecosystem. Major browsers began removing applet support around 2015, and the Java browser plugin itself was discontinued in 2016.

Oracle deprecated the Applet API for removal in Java 17 back in 2021, giving the ecosystem five years of notice. JEP 504 completes that process. The Applet API is gone. Its presence had been a maintenance burden, a source of confusion for developers new to the platform, and a reminder of a technology decision from thirty years ago that long outlived its usefulness. Removing it makes Java smaller, simpler, and slightly more secure.

What Java 26 Is Really Saying

Taken together, the ten JEPs in Java 26 communicate a clear priority ordering. Performance is first, addressed through AOT caching improvements, G1 throughput gains, and the continued Vector API work. Correctness and developer experience are second, addressed through lazy constants, structured concurrency, and the PEM API. Language hygiene is third, addressed through the primitive patterns progression and the final field mutation warnings.

The modest size of the feature set compared to some previous releases reflects deliberate restraint. Java 26 is preparing the platform for Project Valhalla, the long-running OpenJDK effort to introduce value classes and objects: types that carry no identity, are defined purely by their fields, and can be flattened into arrays and stack frames by the JVM rather than always living on the heap. Value classes would represent the most fundamental change to the Java object model since generics arrived in Java 5, and several of the Java 26 changes read as prerequisite steps toward that goal.

For those of us running large scale Java workloads in production, the practical takeaway from Java 26 is straightforward. The startup and throughput improvements are real and compounding across releases. The HTTP/3 support is production ready. The concurrency and pattern matching features are converging toward finalisation. Java continues to be a platform that improves methodically without breaking what already works, which remains its most underrated competitive advantage.


Published on andrewbaker.ninja. Andrew Baker is Chief Information Officer at Capitec Bank and writes about engineering leadership, banking technology, and platform architecture.
