[Segment Cache] Support for non-PPR projects/pages #73960
base: canary
Conversation
Failing test suites
Commit: be2d370
● next.rs api › should allow to write app edge page to disk
● next.rs api › should allow to write app Node.js page to disk
● next.rs api › should allow to write app edge route to disk
● next.rs api › should allow to write app Node.js route to disk
● next.rs api › should have working HMR on client-side change on a page 0
● next.rs api › should have working HMR on client-side change on a page 1
● next.rs api › should have working HMR on client-side change on a page 2
● next.rs api › should have working HMR on server-side change on a page 0
● next.rs api › should have working HMR on server-side change on a page 1
● next.rs api › should have working HMR on server-side change on a page 2
● next.rs api › should have working HMR on client and server-side change on a page 0
● next.rs api › should have working HMR on client and server-side change on a page 1
● next.rs api › should have working HMR on client and server-side change on a page 2
● next.rs api › should have working HMR on client-side change on a app page 0
● next.rs api › should have working HMR on client-side change on a app page 1
● next.rs api › should have working HMR on client-side change on a app page 2
● next.rs api › should have working HMR on server-side change on a app page 0
● next.rs api › should have working HMR on server-side change on a app page 1
● next.rs api › should have working HMR on server-side change on a app page 2
Read more about building and testing Next.js in contributing.md.
Stats from current PR

Default Build (Increase detected)
| | vercel/next.js canary | acdlite/next.js segment-cache-non-ppr-support | Change |
|---|---|---|---|
buildDuration | 18.7s | 15.6s | N/A |
buildDurationCached | 14.7s | 12.4s | N/A |
nodeModulesSize | 410 MB | 411 MB | |
nextStartRea..uration (ms) | 473ms | 469ms | N/A |
Client Bundles (main, webpack) Overall increase ⚠️
| | vercel/next.js canary | acdlite/next.js segment-cache-non-ppr-support | Change |
|---|---|---|---|
1187-HASH.js gzip | 50.9 kB | 54 kB | |
8276.HASH.js gzip | 169 B | 168 B | N/A |
8377-HASH.js gzip | 5.36 kB | 5.36 kB | N/A |
bccd1874-HASH.js gzip | 53 kB | 53 kB | N/A |
framework-HASH.js gzip | 57.5 kB | 57.5 kB | N/A |
main-app-HASH.js gzip | 232 B | 235 B | N/A |
main-HASH.js gzip | 34.1 kB | 34 kB | N/A |
webpack-HASH.js gzip | 1.71 kB | 1.71 kB | N/A |
Overall change | 50.9 kB | 54 kB |
Legacy Client Bundles (polyfills)
| | vercel/next.js canary | acdlite/next.js segment-cache-non-ppr-support | Change |
|---|---|---|---|
polyfills-HASH.js gzip | 39.4 kB | 39.4 kB | ✓ |
Overall change | 39.4 kB | 39.4 kB | ✓ |
Client Pages
| | vercel/next.js canary | acdlite/next.js segment-cache-non-ppr-support | Change |
|---|---|---|---|
_app-HASH.js gzip | 193 B | 193 B | ✓ |
_error-HASH.js gzip | 193 B | 193 B | ✓ |
amp-HASH.js gzip | 512 B | 510 B | N/A |
css-HASH.js gzip | 343 B | 342 B | N/A |
dynamic-HASH.js gzip | 1.84 kB | 1.84 kB | ✓ |
edge-ssr-HASH.js gzip | 265 B | 265 B | ✓ |
head-HASH.js gzip | 363 B | 362 B | N/A |
hooks-HASH.js gzip | 393 B | 392 B | N/A |
image-HASH.js gzip | 4.49 kB | 4.49 kB | N/A |
index-HASH.js gzip | 268 B | 268 B | ✓ |
link-HASH.js gzip | 2.35 kB | 2.34 kB | N/A |
routerDirect..HASH.js gzip | 328 B | 328 B | ✓ |
script-HASH.js gzip | 397 B | 397 B | ✓ |
withRouter-HASH.js gzip | 323 B | 326 B | N/A |
1afbb74e6ecf..834.css gzip | 106 B | 106 B | ✓ |
Overall change | 3.59 kB | 3.59 kB | ✓ |
Client Build Manifests
| | vercel/next.js canary | acdlite/next.js segment-cache-non-ppr-support | Change |
|---|---|---|---|
_buildManifest.js gzip | 749 B | 746 B | N/A |
Overall change | 0 B | 0 B | ✓ |
Rendered Page Sizes
| | vercel/next.js canary | acdlite/next.js segment-cache-non-ppr-support | Change |
|---|---|---|---|
index.html gzip | 524 B | 524 B | ✓ |
link.html gzip | 539 B | 538 B | N/A |
withRouter.html gzip | 519 B | 520 B | N/A |
Overall change | 524 B | 524 B | ✓ |
Edge SSR bundle Size Overall increase ⚠️
| | vercel/next.js canary | acdlite/next.js segment-cache-non-ppr-support | Change |
|---|---|---|---|
edge-ssr.js gzip | 128 kB | 128 kB | N/A |
page.js gzip | 204 kB | 205 kB | |
Overall change | 204 kB | 205 kB |
Middleware size
| | vercel/next.js canary | acdlite/next.js segment-cache-non-ppr-support | Change |
|---|---|---|---|
middleware-b..fest.js gzip | 671 B | 667 B | N/A |
middleware-r..fest.js gzip | 155 B | 156 B | N/A |
middleware.js gzip | 31.2 kB | 31.2 kB | N/A |
edge-runtime..pack.js gzip | 844 B | 844 B | ✓ |
Overall change | 844 B | 844 B | ✓ |
Next Runtimes Overall increase ⚠️
| | vercel/next.js canary | acdlite/next.js segment-cache-non-ppr-support | Change |
|---|---|---|---|
523-experime...dev.js gzip | 322 B | 322 B | ✓ |
523.runtime.dev.js gzip | 314 B | 314 B | ✓ |
app-page-exp...dev.js gzip | 323 kB | 325 kB | |
app-page-exp..prod.js gzip | 127 kB | 128 kB | |
app-page-tur..prod.js gzip | 140 kB | 141 kB | |
app-page-tur..prod.js gzip | 135 kB | 136 kB | |
app-page.run...dev.js gzip | 313 kB | 316 kB | |
app-page.run..prod.js gzip | 123 kB | 124 kB | |
app-route-ex...dev.js gzip | 37.4 kB | 37.4 kB | ✓ |
app-route-ex..prod.js gzip | 25.5 kB | 25.5 kB | ✓ |
app-route-tu..prod.js gzip | 25.5 kB | 25.5 kB | ✓ |
app-route-tu..prod.js gzip | 25.3 kB | 25.3 kB | ✓ |
app-route.ru...dev.js gzip | 39 kB | 39 kB | ✓ |
app-route.ru..prod.js gzip | 25.3 kB | 25.3 kB | ✓ |
pages-api-tu..prod.js gzip | 9.69 kB | 9.69 kB | ✓ |
pages-api.ru...dev.js gzip | 11.6 kB | 11.6 kB | ✓ |
pages-api.ru..prod.js gzip | 9.68 kB | 9.68 kB | ✓ |
pages-turbo...prod.js gzip | 21.7 kB | 21.7 kB | ✓ |
pages.runtim...dev.js gzip | 27.4 kB | 27.4 kB | ✓ |
pages.runtim..prod.js gzip | 21.7 kB | 21.7 kB | ✓ |
server.runti..prod.js gzip | 916 kB | 916 kB | N/A |
Overall change | 1.44 MB | 1.45 MB |
build cache Overall increase ⚠️
| | vercel/next.js canary | acdlite/next.js segment-cache-non-ppr-support | Change |
|---|---|---|---|
0.pack gzip | 2.05 MB | 2.08 MB | |
index.pack gzip | 72.1 kB | 73.9 kB | |
Overall change | 2.13 MB | 2.16 MB |
Diff details
Diff for edge-ssr.js
Diff too large to display
Diff for 1187-HASH.js
Diff too large to display
Diff for main-HASH.js
Diff too large to display
Diff for app-page-exp..ntime.dev.js
Diff too large to display
Diff for app-page-exp..time.prod.js
Diff too large to display
Diff for app-page-tur..time.prod.js
Diff too large to display
Diff for app-page-tur..time.prod.js
Diff too large to display
Diff for app-page.runtime.dev.js
Diff too large to display
Diff for app-page.runtime.prod.js
Diff too large to display
Diff for server.runtime.prod.js
Diff too large to display
This was an oversight on my part when I originally implemented the per-segment prefetches: `onError` needs to return an error digest.
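For reference, here's a minimal sketch of an `onError` handler that returns a digest. React's streaming/prerender APIs use a string returned from `onError` as the error's digest; the hashing scheme below is illustrative, not the one Next.js actually uses:

```ts
import { createHash } from 'node:crypto'

// The returned string becomes the error's digest, which is what gets
// serialized to the client in place of the real error.
function onError(error: unknown): string {
  const message = error instanceof Error ? error.message : String(error)
  // Hash the message so the client can correlate the error without the
  // server leaking its contents. (Illustrative scheme only.)
  return createHash('sha256').update(message).digest('hex').slice(0, 16)
}
```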
Originally I gated per-segment prefetch generation on the PPR flag, because I thought the client Segment Cache would require PPR to be enabled on the server. However, since then the strategy has evolved and I do think we can roll out the Segment Cache independently of PPR. Dynamic pages without PPR won't be able to take full advantage of the Segment Cache, but if the page is fully static then there's no reason we can't implement all the same behavior. So during per-segment prerendering, I've changed the feature condition to check for the `clientSegmentCache` flag instead of the PPR one.
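A sketch of the gating change, assuming a config shape like Next.js's `experimental` options (the function name here is a hypothetical stand-in for the real prerender code path):

```ts
interface ExperimentalConfig {
  ppr?: boolean
  clientSegmentCache?: boolean
}

function shouldPrerenderPerSegment(experimental: ExperimentalConfig): boolean {
  // Before: gated on PPR
  //   return experimental.ppr === true
  // After: gated on the Segment Cache flag, so fully static pages get
  // per-segment prefetches even when PPR is off.
  return experimental.clientSegmentCache === true
}
```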
This is a small refactor to allow creating an empty cache entry without also triggering a server request. Currently these are combined into the same phase, because there's no case where one operation happens without the other. However, I need to implement additional prefetching strategies. For example, sometimes a segment's data will already be available as part of a different server response. To support this, I've split the Pending CacheStatus into two separate statuses:

- Empty: The cache entry has no data, and there's no pending request to fetch it.
- Pending: The cache entry has no data, and there _is_ a pending request to fetch it.

This is a refactor only, so there should be no change to external behavior.
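A sketch of the split (illustrative; the real cache also has statuses for fulfilled and rejected entries):

```ts
enum EntryStatus {
  // No data, and no request in flight to fetch it. An empty entry can
  // now be created without issuing a server request.
  Empty,
  // No data, but a request is in flight.
  Pending,
}

// A later step can either start a fetch (Empty -> Pending) or fill the
// entry directly from data found in another server response.
```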
When I originally set up the Segment Cache transport types, I had the notion that the client should never have to compute the request key that is sent to the server. That way we could, say, encrypt the keys on the server to make them unguessable. But once we added the concept of access tokens for that purpose, this motivation was gone. The remaining motivation was to keep the contract as simple as possible. However, there's a lot of existing code that relies on manipulating the "decoded" form of the segment values. Since we're sending the full segment value anyway, it's wasteful to also send the encoded form, given that it can be trivially re-derived by the client. My immediate motivation is that I need to be able to construct a cache key from a FlightRouterState. As a bonus, this will also get us closer to being able to remove the dynamic param values from the server response, a separate goal that we're currently working on. We can revisit this if/when we're able to remove the existing cases where the full segment is needed by the client.
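A sketch of what re-deriving the encoded key on the client looks like. The tuple shape follows the FlightRouterState convention (a plain string, or [paramName, paramValue, paramType] for dynamic params); the key encoding itself is illustrative:

```ts
type Segment = string | [name: string, value: string, type: string]

function encodeSegmentKey(segment: Segment): string {
  if (typeof segment === 'string') return segment
  const [name, value, type] = segment
  // Fold the param value into the key so two routes that differ only in
  // a dynamic param get distinct cache entries.
  return `${name}$${type}$${encodeURIComponent(value)}`
}
```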
It doesn't look like this argument is logically needed. No callers pass it as an option except for the recursive call, and the recursive call will never set it to `true` because of the `renderComponentsOnThisLevel` branch that happens earlier in the function. I'm guessing it was needed at one point but then the logic changed.
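An illustration of the dead-argument pattern being removed; all names here are hypothetical stand-ins, not the actual router code:

```ts
type TreeNode = { children: TreeNode[]; renderHere?: boolean }

function renderComponentsOnThisLevel(node: TreeNode): boolean {
  return node.renderHere === true
}

function walk(node: TreeNode, flag: boolean = false): void {
  if (renderComponentsOnThisLevel(node)) {
    // This early-return branch runs before any recursion, so the
    // recursive call below is the only internal caller of `walk`...
    return
  }
  for (const child of node.children) {
    // ...and it always passes `false`. Since no external caller passes
    // the flag either, it can never be observed as `true`, which makes
    // the parameter dead.
    walk(child, false)
  }
}
```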
During a non-PPR prefetch, where we render up to the nearest loading boundary, we should be able to skip segments that are already cached on the client. To do this, we need to be able to tell the server that a segment is already cached while also indicating where the "new" part of the navigation starts — that is, the part inside the shared layout. To do this, I added a new kind of marker to FlightRouterState called "inside-shared-layout". It's similar to the "refresh" marker but has different semantics. I tried my best to explain how it's used in the code comments. This implements the marker but does not yet change any behavior on the client. I'm submitting this as its own PR to confirm that none of the existing behavior regresses. I'll follow up with the client changes in a separate PR. As another follow-up, we're overdue for a redesign of the protocol for sending dynamic requests. I'm not sure it makes sense to use the FlightRouterState type, given that it's overloaded with so many other concerns.
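A sketch of where the marker fits. The tuple shape follows the existing FlightRouterState convention; the exact union of marker strings is abbreviated here:

```ts
type Segment = string | [name: string, value: string, type: string]

type FlightRouterState = [
  segment: Segment,
  parallelRoutes: { [parallelRouterKey: string]: FlightRouterState },
  url?: string | null,
  // "refetch" asks the server to re-render from this point down;
  // "inside-shared-layout" instead indicates the segment sits inside a
  // shared layout the client already has cached, i.e. where the "new"
  // part of the navigation starts.
  refresh?: 'refetch' | 'refresh' | 'inside-shared-layout' | null,
  isRootLayout?: boolean,
]
```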
Based on:
This implements support for the Segment Cache in projects/pages where PPR is not enabled.
I originally thought the Segment Cache would be tied to the rollout of PPR, because to take full advantage of per-segment caching, you need PPR to generate static shells for each segment. However, as I was investigating how to support the incremental PPR opt-in API, where some pages support PPR and others don't — perhaps just by falling back to the old cache implementation — I realized that there are benefits to per-segment caching even if the requests themselves are not per-segment.
For example, when performing a non-PPR-style prefetch, the server diffs the prefetched page against a description of the base page sent by the client. In the old caching implementation, the base tree represented whatever the current page was at the time of the prefetch. In the Segment Cache implementation, we improve on this by telling the server to exclude any shared layouts that are already in the client cache.
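A rough sketch of that idea, with simplified stand-ins for the client cache and router tree (the real types are FlightRouterState and the Segment Cache's internal maps):

```ts
type Tree = { segment: string; children: Tree[] }

// Stand-in for a Segment Cache lookup.
const clientCache = new Set<string>(['', 'dashboard'])

// Build the "base tree" sent with a non-PPR prefetch: keep descending
// while the client already caches the layout, and cut the tree off at
// the first uncached segment, which is where the server should start
// rendering.
function buildBaseTree(current: Tree): Tree | null {
  if (!clientCache.has(current.segment)) return null
  return {
    segment: current.segment,
    children: current.children
      .map(buildBaseTree)
      .filter((t): t is Tree => t !== null),
  }
}
```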
This ended up requiring more code than I would have preferred, because dynamic server responses and per-segment server responses have different formats and constraints. However, I realized I was going to have to implement most of this regardless in order to support `<Link prefetch={true}>`, which performs a full dynamic request.

Once more of the pieces are settled, we can simplify the implementation by unifying/redesigning some of the data structures, especially FlightRouterState, which has become overloaded with many different concerns. But we need to land some of our experiments first — one reason there's more code than you might expect is because of all the different combinations of features/experiments we need to support simultaneously.
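For context, the full-prefetch mode mentioned above is the existing next/link opt-in:

```tsx
import Link from 'next/link'

export default function Nav() {
  return (
    // prefetch={true} opts this link into a full prefetch of the target
    // route, including its dynamic data, rather than the default
    // static/partial prefetch.
    <Link href="/dashboard" prefetch={true}>
      Dashboard
    </Link>
  )
}
```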
While it's likely that PPR will be enabled by default sometime within the next few release cycles, supporting non-PPR projects means we have a pathway to rolling out additional prefetching improvements (like prioritization and cancellation) independently of the PPR release.