Java-buildpack: Impossible to run Spring Boot applications with default configuration and <512M

Created on 31 Dec 2018  ·  8 Comments  ·  Source: cloudfoundry/java-buildpack

The fact that it is not possible to run a Spring Boot hello world (MVC or WebFlux) with less than 512M, and that most non-trivial applications require at least 1G on Cloud Foundry, is pretty frustrating for users and gives the false impression that Spring Boot can't run with less than 1G of memory.

We are doing some work with @dsyer, the Boot team, and others to optimize Spring Framework and Spring Boot to generate less GC pressure and consume less memory, but I tend to think we could also improve Cloud Foundry to handle this better.

I have raised https://github.com/cloudfoundry/java-buildpack-memory-calculator/issues/24 about the memory calculation rule.

Another point: if the number of classes is computed by counting all the .class files in the application, including its dependencies, it is not a reliable signal for Spring applications, which do not have very fine-grained JAR granularity and will effectively load only a small fraction of the classes available in the Spring JARs.

The problem is even more visible with the optimizations we shipped in Spring Boot 2.1, and the ones we are currently working on for Spring Boot 2.2.

My gut feeling is that typical users know how much Xmx heap their application needs locally, and we should maybe use that information by default.
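For context, the memory calculator derives the heap budget by subtracting its estimates for every non-heap region from the container limit. A back-of-the-envelope sketch of that subtraction (the region sizes below are illustrative assumptions, not the calculator's exact defaults):

```shell
# Rough sketch of how a memory-calculator-style budget works for a 512M
# container. All region sizes here are illustrative assumptions.
TOTAL=$((512 * 1024))            # container limit, in KB
STACK_THREADS=250                # assumed thread estimate
STACK_SIZE=1024                  # assumed 1M stack per thread, in KB
CODE_CACHE=$((240 * 1024))       # assumed reserved code cache
DIRECT=$((10 * 1024))            # assumed max direct memory
METASPACE=$((90 * 1024))         # assumed metaspace, scales with class count
HEAP=$((TOTAL - STACK_THREADS * STACK_SIZE - CODE_CACHE - DIRECT - METASPACE))
echo "heap budget: ${HEAP}K"     # negative or tiny => the app fails to start
```

With these (hypothetical) numbers the non-heap regions alone exceed 512M, leaving a negative heap budget, which is exactly the failure mode the issue describes.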

question


All 8 comments

It is possible to run with less than 512M by configuring the memory calculator.
The following manifest works for me (Spring Boot + WebFlux).

applications:
- name: myapp
  path: target/myapp-0.0.1-SNAPSHOT.jar
  memory: 256m
  env:
    JAVA_OPTS: '-XX:ReservedCodeCacheSize=32M -XX:MaxDirectMemorySize=32M'
    JBP_CONFIG_OPEN_JDK_JRE: '[memory_calculator: {stack_threads: 30}, jre: {version: 11.+}]'

A vanilla WebFlux app actually runs with only 60 MB of RAM.

I'm a big fan of the memory calculator, since most developers tend to care only about heap size and then hit an unexpected OOME (e.g. in Metaspace), but I agree that we could improve it.
The default thread count (300?) in the calculator is too large, at least for WebFlux.

For beginners, it's pretty hard to figure out how to customize the memory settings.
Configuration examples by use case in the README would be very helpful.

@making I have updated the title of this issue to make it clearer that it is about the default configuration being sub-optimal for (at least) Boot applications. It is of course entirely possible to run Boot applications with 256M and custom configuration, but the default memory configuration seems to me very far from accurate, and that's where I would like us to make some progress, because it impacts a lot of users.

The number of threads you mention for WebFlux applications is indeed a very interesting point; the buildpack could provide additional value by detecting what kind of app it is. It may be tricky, since some MVC applications can use the reactive WebClient, but I am sure we can do something more clever and accurate.

Your custom memory configuration highlights, IMO, that something is wrong in the automatic configuration mechanism. As explained in https://github.com/cloudfoundry/java-buildpack-memory-calculator/issues/24, if I explicitly specify the number of classes effectively used by Boot apps (between 8000 and 10000), the generated parameter is -XX:ReservedCodeCacheSize=240M, whereas you specify -XX:ReservedCodeCacheSize=32M. This difference is really huge; could we make a more educated guess?

Also, 8000 to 10000 is the number of classes effectively used by Boot apps. If the buildpack computes that from the number of classes in the app plus its dependencies, I tend to think the class count fed to the memory calculator will be artificially high (I still need to check the value currently guessed by the buildpack) due to the nature of the Spring Framework JARs.

I am also wondering whether we leverage the new container memory options available in recent Java 8 and in Java 11. They are designed for Docker, but I guess we could benefit from them in Cloud Foundry as well. Is the Java runtime aware that we are running in containers despite CF not using Docker? Do we leverage options like -XX:InitialRAMPercentage, -XX:MaxRAMPercentage, or -XX:MinRAMPercentage?
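For reference, those percentage flags only size the heap relative to the detected container limit. A quick sketch of the arithmetic (the container size and percentage here are hypothetical):

```shell
# Hypothetical 256M container started with -XX:MaxRAMPercentage=75.0:
# the JVM sizes the max heap as a percentage of the detected memory limit.
LIMIT_MB=256
MAX_RAM_PERCENTAGE=75
HEAP_MB=$((LIMIT_MB * MAX_RAM_PERCENTAGE / 100))
echo "max heap: ${HEAP_MB}M"
# Everything else (metaspace, code cache, thread stacks, direct buffers)
# has to fit in the remaining 25% and is not managed by these flags.
```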

It is a tricky problem @sdeleuze. Should the java-buildpack use the Java defaults, assuming some thought went into them, so that applications behave by default more like they do when not running on CF? Or should the java-buildpack change the defaults to make the initial experience better, but leave users searching for answers when the application doesn't run as expected outside CF?

Today the java-buildpack has chosen to trust the Java and Tomcat defaults as reasonable values for a typical application. I personally prefer that approach to changing Java defaults in ways that may affect the application without being very visible or explicit to the user.

As far as the new properties go, my understanding of the RAMPercentage properties is that they simply set heap values automatically based on a percentage of the available RAM. There is no accounting for non-heap memory requirements like code cache size, thread stack size, metaspace, GC overhead, etc. That work is still left as an exercise for the user, who has to guess how much non-heap memory their application needs. I look forward to the day when Java can simply manage all the memory pools properly to keep an application within the memory constraints of a container, but I suspect we are still a long way from that.

As a side note, this is the JAVA_OPTS and jre config I set for applications that need to be stable with low memory, where speed isn't a factor. I also set MaxMetaspaceSize to a value discovered through profiling.

JAVA_OPTS:

-Xss256K -Xms1M -XX:+UseSerialGC -Djava.compiler=none -XX:ReservedCodeCacheSize=2496k -XX:MaxDirectMemorySize=1M

jre config:

  stack_threads: 20
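Assembled into a push manifest, that configuration would look roughly like this (a sketch only: the app name, path, and memory limit are placeholders, and the MaxMetaspaceSize value would come from your own profiling):

```yaml
applications:
- name: myapp                      # placeholder
  path: target/myapp.jar           # placeholder
  memory: 64m                      # illustrative low limit
  env:
    JAVA_OPTS: '-Xss256K -Xms1M -XX:+UseSerialGC -Djava.compiler=none -XX:ReservedCodeCacheSize=2496k -XX:MaxDirectMemorySize=1M'
    JBP_CONFIG_OPEN_JDK_JRE: '[memory_calculator: {stack_threads: 20}]'
```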