
Nested Future blocks do not run in parallel in Scala 2.13.x #12089

Closed
lihaoyi opened this issue Jul 19, 2020 · 76 comments · Fixed by scala/scala#9270

@lihaoyi commented Jul 19, 2020

reproduction steps

In the code below, we can see that in Scala 2.13.2, the three nested Future blocks run sequentially, whereas in Scala 2.12.12 they run in parallel. I would expect them to run in parallel in both cases, as I am spawning a small number of Futures and have 16 cores on this machine that should be able to run them in parallel.

$ sbt '++2.13.2!; console'
Welcome to Scala 2.13.2 (OpenJDK 64-Bit Server VM, Java 1.8.0_252).
Type in expressions for evaluation. Or try :help.

scala> import scala.concurrent._, ExecutionContext.Implicits._, duration.Duration.Inf

scala> def slow(key: String) = Future{ println(s"$key start"); Thread.sleep(1000); println(s"$key end"); key }

scala> def runAsyncSerial(): Future[Seq[String]] = slow("A").flatMap { a => Future.sequence(Seq(slow("B"), slow("C"), slow("D"))) }

scala> Await.result(runAsyncSerial(), Inf)
A start
A end
D start
D end
C start
C end
B start
B end
val res0: Seq[String] = List(B, C, D)

$ sbt '++2.12.12!; console'
Welcome to Scala 2.12.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_252).

scala> import scala.concurrent._, ExecutionContext.Implicits._, duration.Duration.Inf

scala> def slow(key: String) = Future{ println(s"$key start"); Thread.sleep(1000); println(s"$key end"); key }

scala> def runAsyncSerial(): Future[Seq[String]] = slow("A").flatMap { a => Future.sequence(Seq(slow("B"), slow("C"), slow("D"))) }

scala> Await.result(runAsyncSerial(), Inf)
A start
A end
B start
C start
D start
B end
C end
D end
res0: Seq[String] = List(B, C, D)

AFAICT this is a minimal repro; either (A) wrapping the Thread.sleep in blocking(), (B) using an explicit thread-pool execution context, or (C) removing the flatMap wrapper all seem to make it run correctly in parallel on 2.13.2. This applies to all 2.13.x versions.
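
For reference, a minimal sketch of workaround (A), reusing the imports and definitions from the repro above: wrapping the sleep in blocking marks it as a potentially blocking section, which (per the observation above) restores parallel execution on 2.13.x.

def slowBlocking(key: String) = Future {
  println(s"$key start")
  blocking { Thread.sleep(1000) } // scala.concurrent.blocking signals a potentially blocking section
  println(s"$key end")
  key
}

// substituting slowBlocking for slow in runAsyncSerial makes B, C and D start together again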

OTOH -Ydelambdafy:inline or replacing the flatMap with map + flatten seems to make no difference. Neither does changing the JVM version from 8 to 11, or swapping between sbt console and Ammonite, or moving it from the REPL into a .sc script or full .scala file in a Mill project.

In the above example I'm spawning a small number of Futures, but in the case where I hit this in the wild I was spawning 100s of Futures and having them all run serially instead of in parallel, resulting in a 16x slowdown over what I was expecting.

Notably, this slowdown applies regardless of how long the operations are: an operation that takes 1000ms to run 16x parallel now would take 16000ms, but an operation that takes 1ms to run 16x parallel would now take 16ms. Both could be equally bad depending on where it happens (e.g. in a lot of backend services, an extra 15ms on every operation would violate SLAs).

Furthermore, the Futures documentation precisely and accurately describes the behavior of the ExecutionContext.global pre-2.13, as @jimm-porch has pointed out (https://docs.scala-lang.org/overviews/core/futures.html):

The number of concurrently blocking computations can exceed the parallelism level only if each blocking call is wrapped inside a blocking call (more on that below).

Last but not least, you must remember that the ForkJoinPool is not designed for long lasting blocking operations.

This description is no longer accurate as of 2.13. From what I have gathered in the discussion on this thread, the 2.13 behavior is better described as:

The number of any concurrent computations can exceed one only if each call is wrapped in a blocking call

Last but not least, you must remember that the ForkJoinPool is not designed for CPU-bound operations.

@lihaoyi changed the title from "Nested futures do not run in parallel in Scala 2.13.x" to "Nested Future blocks do not run in parallel in Scala 2.13.x" on Jul 19, 2020
@SethTisue (Member) commented Jul 19, 2020

my understanding is that this is working as designed, as per scala/scala#7470, and that you're expected to deal with it by:

wrapping the Thread.sleep in blocking()

cc @viktorklang

@SethTisue (Member) commented Jul 19, 2020

(Perhaps https://github.com/scala/scala/releases/tag/v2.13.0 should have some wording about this.)

@lihaoyi (Author) commented Jul 19, 2020

A silent change in time taken to run the simplest of behaviors from O(1) to O(n) is certainly not what I would expect when I read “faster and more robust” 😛

The linked PR talks about "combinators". I'm not using any combinators here, except for the wrapping flatMap, which is really irrelevant. The only thing I'm doing is spawning Future{...} blocks.

@som-snytt

On the gitter, I said, "it would be nice if the docs explained the idiom and also why it isn't self-explanatory."

The side-effecting onComplete and foreach are get-out-of-jail-free, perhaps because of the guidance that you don't care when they run or in what order. Maybe also because it represents extra or other work.

Someone also quoted the code comment for batching that is not in the docs. (I just remembered that I've had to quote before, too. Apparently Scaladoc lets you include --access again for non-public members.) The quote breaks off before the caveat:

However, if tasks passed to the Executor are blocking or expensive, this optimization can prevent work-stealing and make performance worse. A batching executor can create deadlocks if code does not use `scala.concurrent.blocking` when it should, because tasks created within other tasks will block on the outer task completing. This executor may run tasks in any order, including LIFO order.

As shown above, the comment should read, "especially LIFO order."

@viktorklang

A silent change in time taken to run the simplest of behaviors from O(1) to O(n) is certainly not what I would expect when I read “faster and more robust” 😛

If you didn't do the Thread.sleep() it would most likely be faster. Also, not surrounding it in blocking{} is inadvisable.

The linked PR talks about "combinators". I'm not using any combinators here, except from the wrapping flatMap which is really irrelevant. The only thing I'm doing is spawning Future{...} blocks

From the Future.apply scaladoc:

The following expressions are equivalent:

val f1 = Future(expr)
val f2 = Future.unit.map(_ => expr)
val f3 = Future.unit.transform(_ => Success(expr))

If you want to run a side-effect you should consider Future.unit.foreach(_ => effect) or Future.unit.onComplete(_ => effect)

@lihaoyi (Author) commented Jul 20, 2020

I don't want to run a side effect, I just want to be able to parallelize these computations using Futures, the same way I have been doing since Scala 2 introduced Futures and Promises in SIP14. What puzzles me most is that surely I can't be the first person bumping into these issues? I assume that other people also rely on Futures to parallelize things? The above code snippet isn't exactly an esoteric edge case.

The time taken to run each individual task seems irrelevant here: we have code that used to be parallel, but is now not, and the slowdown can be made arbitrarily large depending on how many Futures I kick off. I hit this kicking off 100-ish Futures on a 16-core machine and having them all complete sequentially, a 16x slowdown. If I was running on a beefy EC2 instance like an m5.24xlarge with 96 cores it would have been a 96x slowdown. It would take quite a lot of reduction in fixed overhead to overcome a 96x slowdown due to loss of parallelization.

Do Futures created in other Futures never run in parallel any more when using the default ExecutionContext? If not, then when do they still run in parallel? This isn't just a theoretical concern: in one part of our system it was causing 16x slowdowns, in another it was causing mysterious 5-10 second delays in our asynchronous push-update code paths. Both of these are real user-facing consequences that I as the product owner can't brush under the rug, so I'd like to be able to understand the mechanism that's causing this to behave as it does.

@viktorklang

@lihaoyi Whether or not something will run in parallel is up to the ExecutionContext. With the default EC it will parallelize "top-most" Futures (assuming that it has more than one thread) and break off eligible "callbacks" once a blocking{} is encountered. Assuming you have written your code to take an implicit ExecutionContext you can pass in one which gives you the desired behavior—there is always a tradeoff between performance and coordination overhead.

What could be considered is a system property to turn batching off, if one wants to maximize parallelism.
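
As an illustration of the "top-most" point (and of workaround (C) from the issue description): if the three futures are not created inside another Future's callback, they are submitted as independent top-level tasks and should run in parallel even on 2.13.x. A minimal sketch, reusing slow from the repro:

def runTopLevel(): Future[Seq[String]] =
  Future.sequence(Seq(slow("B"), slow("C"), slow("D"))) // no enclosing callback, so these are "top-most" futures

// Await.result(runTopLevel(), Inf) should print the B/C/D starts interleaved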

@lihaoyi (Author) commented Jul 20, 2020

With the default EC it will parallelize "top-most" Futures (assuming that it has more than one thread) and break off eligible "callbacks" once a blocking{} is encountered

Thanks Viktor! This is exactly what I wanted to know. I'm aware of the tradeoff between parallelism and coordination overhead. I must say that I have never seen the blocking{} semantics explained before in such a clear and concise fashion

While I don't really agree with changing the choice of tradeoff so drastically for the global execution context in Scala 2.13, it is what it is.

@SethTisue is there any way we can advertise this large change in behavior more broadly? I do think of myself as someone who knows my way around Scala and Futures, but this change in Scala 2.13 completely blindsided me. I spent months wondering why my Futures were sometimes not scheduling promptly (but never in a minimal dev environment, only in production under load!) and a full day trying to minimize the test case presented above from a program behaving 16x slower than it should. I imagine the vast majority of Scala developers would have even more difficulty than I would if they bump into this behavior change in the wild.

@lihaoyi (Author) commented Jul 20, 2020

Also, if this is expected behavior should we close this issue?

@som-snytt

I was going to suggest keeping this issue open for improving the docs for clarification. After re-reading the Scaladocs, Javadocs, and the overview section, I understand that task scheduling is up to the execution context, that fork-join tasks should be of an appropriate size, that I don't know what API incurs a submit or a fork in the global context, and that I'm not convinced blocking is a great way to say "this bit of code doesn't block but is an appropriate unit of work for work-stealing." (The Javadocs say a unit of work should be yea-big so scheduling overhead doesn't swamp useful work. What if future used a macro that examined byte code to estimate benefitsFromBatching?)

The overview has a stub section for "extensions", so maybe a point of adumbration would show how to write some code to improve fork-join ergonomics.

Alternatively, a blog post about the Klang levels, K0 thru K3, to correlate expertise with what usages one should attempt. Maybe the API should all take an implicit K-level, import concurrent.K1 would enable certain combinators but no blocking.

I see I was due for a semi-annual review of how do futures work again? because my last was for #11827 which was also unexpectedly expected behavior. There must be an Onion headline along the lines of, "Coder most surprised to learn they violated the principle of least surprise."

@viktorklang

@som-snytt

I'm not convinced blocking is a great way to say "this bit of code doesn't block but is an appropriate unit of work for work-stealing."

That's not really what happens though—if there are enqueued operations locally (on the current thread) then those will be submitted to be executed to the pool prior to running the blocking chunk of code, in order to preserve liveness under blocking. This paired with ManagedBlockers in FJP works rather nicely together by trying to ensure that there are threads available to process non-blocking operations in the face of blocking operations.

@nadavwr commented Jul 21, 2020

Prior to 2.13 I remember first seeing blocking used to convey non-blocking fork in the context of Akka's default dispatcher, which at the time was already batching.

Now that ExecutionContext is batching in Scala 2.13, perhaps adding fork as an alias to blocking would be advisable? This would better convey user intent—the user intends the nested Future not to be linked to the same fiber.

@mushtaq commented Jul 21, 2020

@viktorklang a clarifying question:

If the slow method above does not use Thread.sleep, instead it represents a non-blocking web-service call (say using Akka Http client), then the nested invocations of slow within runAsyncSerial will run in parallel, right?

@viktorklang

@mushtaq It depends on how/where the async web-service calls are processed—they would be initiated serially by the Global EC, but their processing may occur on some other pool in parallel.

@viktorklang

@nadavwr

Now that ExecutionContext is batching in Scala 2.13, perhaps adding fork as an alias to blocking would be advisable? This would better convey user intent—the user intends the nested Future not to be linked to the same fiber.

fork wouldn't be semantically correct since the block of code inside blocking{} is not forked, but any locally (thread locally) enqueued Future-operations would be resubmitted to the pool, so that they can be processed by some other thread.

@viktorklang

@lihaoyi One option would be to treat Future.apply as a fork (semantically) which would then remove the similarity from Future.unit.map(_ => block) in which case users could themselves select if they wanted fork-like semantics or not.

@alexandru

Future.apply being a "fork" might be the correct option because users expect it.

In tutorials on Future I've seen it mentioned that Future.apply forks and that if people don't want that then Future.successful should be used.

If an alternative is desired that's only memory-safe (i.e. one that registers a Runnable with Batching in the underlying thread-pool), then that alternative should be separate from Future.apply. I'd suggest Future.trampoline 🙂

@nadavwr commented Jul 21, 2020

@viktorklang Right. Perhaps def fork[A](f: => Future[A]): Future[A] = blocking(f) then?

Future.apply does sound pleasantly unsurprising.

@mushtaq commented Jul 21, 2020

@mushtaq It depends on how/where the async web-service calls are processed—they would be initiated serially by the Global EC, but their processing may occur on some other pool in parallel.

We are using the same EC (ActorSystem.executionContext ) for all non-blocking calls, including the web-client. If I understand you @viktorklang, nested Future.sequence will not parallelise in that case. If so, that is really sad 😞

I feel that this is a very fundamental change, and even long-time scala.concurrent.Future users may not have understood this implication yet.

I think some guidance on how to retain the parallel nature of nested Future.sequence calls in purely asynchronous applications will help. Is there any role of blocking in such cases? Wrapping non-blocking web-client calls with blocking does not look right. Maybe, we should use a separate EC even for non-blocking web calls?

@viktorklang

@mushtaq It completely depends on how ActorSystem.executionContext is implemented.

@viktorklang

@nadavwr

Right. Perhaps def fork[A](f: => Future[A]): Future[A] = blocking(f) then?

No, that doesn't help.

This would though:

https://github.com/scala/scala/pull/9129/files

@lihaoyi (Author) commented Jul 22, 2020

I don't think special casing Future.apply is the right way to go. I would expect the following to run in parallel as well:

def slow(key: String) = Future{ println(s"$key start") }.map{_ => Thread.sleep(1000); println(s"$key end"); key }

def runAsyncSerial(): Future[Seq[String]] = slow("A").flatMap { a => Future.sequence(Seq(slow("B"), slow("C"), slow("D"))) }

Is it possible to only batch things together when we're actually chaining operations on the same Future? I want this to be one batch

Future{ Future(...).map(...).map(...).map(...).map(...) }

And I want this to be two batches:

Future{ (Future(...).map(...).map(...), Future(...).map(...).map(...)) }

I think that would be intuitive and would not surprise anyone, whereas the current batching of completely unrelated futures is very surprising

@jrudolph commented Jul 22, 2020

At least for Future.apply the 2.13 behavior is unfortunate, indeed.

I think that would be intuitive and would not surprise anyone, whereas the current batching of completely unrelated futures is very surprising

AFAICS this works as expected with Viktor's patch as long as you have a Future.apply in the beginning of each branch that you want to run in parallel. I'm not sure if it's guaranteed to be run in parallel or if it is just more likely. I'm inclined to think that it's guaranteed because all those forked initial future thunks will not be batched but will be put into task submission queues of the underlying EC. And each thread will first work on its batching queue before looking at other tasks, so the parallel tasks should be up for grabs for other threads.

Is it possible to only batch things together when we're actually chaining operations on the same Future? I want this to be one batch

Future{ Future(...).map(...).map(...).map(...).map(...) }

And I want this to be two batches:

Future{ (Future(...).map(...).map(...), Future(...).map(...).map(...)) }

The first example doesn't make sense because there's a single dependency chain between all tasks, so, of course, they must be run sequentially. The question is more if in

val a: Future[...] = ...
Future.sequence(Seq(a.map(longRunning), a.map(longRunning2)))

those two long-running tasks would be run in parallel, because they depend on the same original future. If that's not working with Viktor's patch, I'd still find it acceptable if it required using Future.apply around those long-running chunks of work as well.

In general, it's tempting to simply use Future.apply (or map or whatever) for long-running CPU-intensive tasks for parallelization, but for best performance you will always need fine tuning for these kinds of loads. Putting long-running tasks on the main EC of your application will have almost the same bad thread-starvation effects as blocking tasks. Using blocking still wouldn't be the right thing, because you exactly don't want to spawn extra threads if you are already CPU-bound. In many cases, you might want to either break down tasks into smaller pieces or run them on dedicated ECs to keep the main ECs free for asynchronous task handling.

@lihaoyi (Author) commented Jul 22, 2020

AFAICS this works as expected with Viktor's patch as long as you have a Future.apply in the beginning of each branch that you want to run in parallel.

This is good enough. As you said, if both sides start with a: Future[...], I would expect the maps to run in parallel. Future.apply shouldn't be special; what I want is for unrelated tasks in the computation graph to run in parallel.

In general, it's tempting to simply use Future.apply (or map or whatever) for long-running CPU-intensive tasks for parallelization, but for best performance you will always need fine tuning for these kinds of loads. Putting long-running tasks on the main EC of your application will have almost the same bad thread-starvation effects as blocking tasks. Using blocking still wouldn't be the right thing, because you exactly don't want to spawn extra threads if you are already CPU-bound. In many cases, you might want to either break down tasks into smaller pieces or run them on dedicated ECs to keep the main ECs free for asynchronous task handling.

This may or may not be true, but it's somewhat irrelevant here: these problems are well known, and well understood, well beyond the Scala ecosystem. I can take a colleague who's familiar with Node.js, Java's ThreadPoolExecutors, or Python's multiprocessing.pool and have them immediately understand the problems with thread starvation.

What I'm asking for isn't to magically solve thread starvation, what I'm asking for is to have a predictable behavior that unrelated parts of an asynchronous computation will proceed in parallel, as far as the threads allow. This was the case for the global ExecutionContext for 7 years from Jan 2013, and only changed in June 2019. We can wring our hands forever about what the "ideal" solution would be, but ExecutionContext.global was a working solution that I had put into production many times over the past 7 years, and used in countless onboarding and teaching material. I'm rather unhappy to see my 7 years of production systems, blog posts, tech talks, and a book so casually broken in the name of saving a few nanoseconds.

@jrudolph

saving a few nanoseconds

What I wanted to say is that in the end, there are at least two different use cases for Futures: one is efficient management of asynchronous callbacks (that are usually short running) vs. using the same infrastructure for efficiently running long-running tasks. Maybe people who mostly run things with async in mind are rather glad about the performance improvements (also because they already manage the long-running tasks in special ways anyway). So, it really is a trade-off that probably went too far with the change in 2.13. With the suggested change to Future.apply, maybe the default solution is better balanced for most cases? But I wouldn't want to be the judge here, it might be that there's no solution that fits all (in which case avoiding breaking stuff might be the better choice).

We can wring our hands forever about what the "ideal" solution would be, this was a working solution

I think we need to wring our hands exactly because things always change, and we need to argue about which improvements we can make and which ones we might not want to make. Stability is important, but there must be room for improvements.

@eed3si9n (Member)

.. ExecutionContext.global was a working solution that I had put into production many times over the past 7 years, and used in countless onboarding and teaching material. I'm rather unhappy to see my 7 years of production systems, blog posts, tech talks, and a book so casually broken in the name of saving a few nanoseconds.

+1 on Haoyi here.

Making any changes to the standard library should be done in a careful way such that it won't change the semantics of production code, especially for a popular feature like ExecutionContext.global. Given that the semantics can be changed by bringing in a new ExecutionContext, would it be possible to make the BatchedExecutor an explicit opt-in instead?

@mdedetrich commented Jul 27, 2020

@lrytz I understand the issue; the point is that the new batching executor is designed not to parallelize certain tasks, because parallelizing them is slower than running them in place. I.e. the batching executor is meant to solve the problem where someone tries to calculate Fibonacci using Futures: in such a case you don't want those tasks to be calculated on separate threads/pools, and ideally you would calculate the entire Fibonacci on a single thread (and also the same thread you are currently on).

This works fine assuming you use blocking correctly. Yes, it's true that you may have fewer parallel tasks running, but if you are doing something like

Future { 1 }.flatMap { x => Future { x + 1 } }

You don't want the x + 1 to be parallelized and to go onto another thread, because it's really slow to do so (and that is how the default global ExecutionContext worked prior to Scala 2.13.x); x + 1 is an incredibly fast operation, so it's better to just run it on the current thread. The issue is that there is no way to figure out what is "fast" and what is "slow" for thread-blocking operations (i.e. x + 1 vs Thread.sleep(1000)), hence why blocking exists.
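
For context, a deliberately naive sketch of the kind of workload batching is meant to help: every step is a tiny nested task created inside another task's callback, so dispatching each one to a different thread would cost far more than the work itself.

import scala.concurrent._, ExecutionContext.Implicits.global

def fib(n: Int): Future[BigInt] =
  if (n < 2) Future.successful(BigInt(n))
  else fib(n - 1).flatMap(a => fib(n - 2).map(b => a + b)) // nested, tiny, CPU-trivial tasks: the case batching optimizes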

@mdedetrich commented Jul 27, 2020

The default execution context never had a “not suitable for tasks longer than 10us” warning (or however much the scheduling overhead is). If it did, this discussion would be different, but it didn’t.

This was documented in Future, see #12089 (comment). There is an argument that the documentation is unclear, hence my previous comment about linting (i.e. it is a shame that there isn't a standard Scala linter in common use with such a rule to pick this up).

@lihaoyi (Author) commented Jul 27, 2020

@mdedetrich that quoted documentation perfectly describes the limitations of ExecutionContext.global pre-2.13. It does not reflect the limitations since 2.13.0.

@mdedetrich commented Jul 27, 2020

Sure, I said it was unclear. The critical point being made is that this limitation (as you call it) applies to Future generally, irrespective of what ExecutionContext is being used and what Scala version you are using. The design of Future makes it very clear why this is the case.

It has always been the case that when using blocking operations with Future you should either use blocking or put the blocking computation onto a specific ExecutionContext using map/flatMap and explicitly specifying a suitable ExecutionContext (i.e. a fixed/cached thread pool).

Again, it's a big shame that this convention wasn't documented clearly enough.

EDIT: Even for other IO types this is the case, i.e. see https://tpolecat.github.io/doobie/docs/14-Managing-Connections.html where they recommend using a fixedThreadPool for JDBC. This is a general problem with async code: you get into problems when mixing long blocking code with async code (and this is irrespective of Scala's Future; even Node.js has the same problem).

@lihaoyi (Author) commented Jul 27, 2020

The thing is, it was not unclear. The linked documentation is precise and accurate and clear as day. Just because we don't like the documented behavior doesn't mean we get to retroactively declare it "unclear".

@mariogalic commented Jul 27, 2020

May I suggest linking the advice regarding usage of blocking for Future(blocking(longOp)) directly in the ExecutionContext scaladoc, or maybe even the Future scaladoc, instead of just the concurrent package scaladoc. Even in the concurrent package doc it is not actually stated directly; one must follow the link to see the first concrete example at https://docs.scala-lang.org/overviews/core/futures.html#blocking-inside-a-future.

@mdedetrich commented Jul 27, 2020

The thing is, it was not unclear. The linked documentation is precise and accurate and clear as day.

Okay, replace unclear with misleading. The same point stands and we are kinda grasping at straws here.

Just because we don't like the documented behavior doesn't mean we get to retroactively declare it "unclear".

It's not about "not liking", it's about whether it's intended behavior. Abstractions are designed to have an intended behavior, otherwise they lose all meaning/reasoning.

And yes, I am saying that we should also update the documentation and any other materials (as much as possible) to make this ultra clear. Future does not work well when mixing blocking and non-blocking operations unless you explicitly tell it (one way or another) which operations are blocking; otherwise you get problems like this. This is how it's designed and coded.

As I said before, you can keep on doing what you are doing (and btw I personally don't have an issue with reverting the default ExecutionContext), but regardless of whether this gets changed or not, you are still misusing the abstraction. It's just very unfortunate that this was documented badly/incorrectly etc.

@sjrd (Member) commented Jul 27, 2020

I would like to echo one of @lihaoyi's points about the documentation.

To me it was always very clear that you should wrap blocking operations with blocking {}.
It was however also very clear that you did not have to use blocking {} for long-running operations, and in fact shouldn't use it.

In other words, as long as the task keeps the CPU busy, there should be no reason to use blocking {} (if there was it would be called longRunning, not blocking), because I'm still maximizing throughput.

If I have a blocking operation, i.e., one that leaves the CPU idle, then I need blocking {} to keep maximizing throughput.

The documentation was never saying that non-blocking long-running operations should be wrapped in blocking {}. In fact, since it keeps repeating that blocking operations should be wrapped in blocking {}, and never mentions long-running operations, it is clearly telling people not to use blocking {} for non-blocking long-running operations.

Whether or not Futures were intended to require blocking {} for long-running operations is irrelevant today, because both the documentation and the implementation were in favor of not using blocking {} for long-running operations.

@lihaoyi (Author) commented Jul 27, 2020

@sjrd’s comment makes me realize one more thing: it is impossible to write code using ExecutionContext.global that behaves properly in 2.12 and 2.13, no matter how we treat CPU-bound tasks (doesn’t need to be long running)

  • If we use blocking{} around CPU bound tasks, in Scala 2.13 we’ll get a thread pool explosion and have many many more threads than cores

  • If we do not use blocking{} around CPU bound tasks, in 2.13 we get zero parallelism

Breakage aside, and even setting aside the inability to easily migrate old code to preserve 2.12 parallelism behavior in 2.13, this kind of impossible-to-make-cross-version-compatible code is another level of inconvenience.

@viktorklang

In other words, as long as the task keeps the CPU busy, there should be no reason to use blocking {} (if there was it would be called longRunning, not blocking), because I'm still maximizing throughput.

Throughput for that specific task, but you're impacting fairness negatively by excessively occupying a shared resource.
It is also important to remember that "blocking" describes a behavior, not an implementation (as it can be implemented in the VM or OS as spin-locks, back-off spin-locks, mutexes, etc). In this case the documentation should be clarified to mean "blocking progress for other tasks" (these are hard to word, because there is no universal metric of fairness).

Whether or not Futures were intended to require blocking {} for long-running operations is irrelevant today, because both the documentation and the implementation were in favor of not using blocking {} for long-running operations.

And it was also clear that global was not necessarily a good place to issue long-running operations. Ultimately, documentation will never prevent all unintended uses, nor reliance on runtime behaviors which are not documented; alas, that doesn't mean that we shouldn't strive to always improve it.

All that said, all while it is good to discuss how documentation can be improved (it almost certainly always can be), I think it is more constructive to discuss the path forward.

I think the least disruptive option is to introduce a batched version of global separately (it will still run on the same thread pool), so that those who are mindful about the impact of their tasks on other tasks can regain the same performance as in 2.13.0/1/2 by switching to it, while restoring the <2.13 behavior for existing code. Improving the documentation for all the parts can be done as part of this change (being clearer about the expected behavior of tasks, recommending blocking {} in more places, clarifying what blocking {} means, etc.).

@viktorklang

If we use blocking{} around CPU bound tasks, in Scala 2.13 we’ll get a thread pool explosion and have many many more threads than cores

@lihaoyi You can specify the number of extra threads:

scala.concurrent.context.maxExtraThreads = defaults to "256"
[..]
The maxExtraThreads is the maximum number of extra threads to have at any given time to evade deadlock, see scala.concurrent.BlockContext.
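
For example, the cap can be raised with a JVM system property at startup (a sketch; the value, jar name, and main class are only illustrative):

$ java -Dscala.concurrent.context.maxExtraThreads=512 -cp app.jar example.Main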

@mdedetrich

@lihaoyi

If we use blocking{} around CPU bound tasks, in Scala 2.13 we’ll get a thread pool explosion and have many many more threads than cores

You can also specify an explicit ExecutionContext, e.g. a cached thread pool would likely be appropriate for your use case. That way you won't run out of threads (it also depends on what blocking operations you are doing).
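
A minimal sketch of that suggestion (the names are illustrative): create a dedicated pool for the blocking calls and pass it explicitly where those Futures are built, keeping global for the non-blocking work.

import java.util.concurrent.Executors
import scala.concurrent._

// unbounded cached pool for blocking work, so blocked threads don't starve the global pool
val blockingEc: ExecutionContextExecutorService =
  ExecutionContext.fromExecutorService(Executors.newCachedThreadPool())

def slowOnDedicatedPool(key: String): Future[String] =
  Future { Thread.sleep(1000); key }(blockingEc) // explicit EC instead of the implicit global

// call blockingEc.shutdown() when the application no longer needs it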

@lihaoyi (Author) commented Jul 27, 2020

@viktorklang I know I can tweak maxExtraThreads, but that's still not sufficient.

My understanding is that the ideal thread pool size is num_threads = num_cores + num_io_blocking_operations. This is the semantics provided by the pre-2.13 ExecutionContext.global, and works to ensure all cores are fully utilized without having extraneous threads to add contention and context-switching overhead. This is common doctrine in any asynchronous programming environment, beyond just Scala.

This is implemented by having a default thread pool which is the size of num_cores, and asking the developer to mark blocking IO with blocking{} blocks so the runtime can correctly spin off more threads as necessary (up to maxExtraThreads).

With the 2.13 blocking semantics, I am being told that I should use blocking{} for every CPU bound operation as well as every long IO-bound operation. That means that the runtime is unable to distinguish blocking-IO and CPU-bound operations:

  • If maxExtraThreads is high, I get num_threads > num_cores + num_io_blocking_operations, resulting in too many threads fighting for a small number of CPUs, and thus too much context switching and inefficiency

  • If maxExtraThreads is low, I end up with num_threads = num_cores < num_io_blocking_operations, and thus idle CPUs

In pre-2.13, the blocking{...} blocks were what told the runtime what num_io_blocking_operations was at any one point in time, so num_threads could dynamically adjust in response. In 2.13, the runtime no longer gets this signal: blocking{} now is used for both IO-bound and CPU-bound actions, and the runtime can no longer ensure num_threads = num_cores + num_io_blocking_operations.

Maybe this num_threads = num_cores + num_io_blocking_operations doctrine has been wrong all along, but it's served well for a long time and doesn't seem unreasonable to me.

@viktorklang

@lihaoyi In my experience, if you're CPU-bound you want about num_threads = 0.7 x num_cores since there are GC-threads and other threads who also need to be able to get CPU-time to avoid STW (stop-the-world) pauses.

Your situation w.r.t. num_threads and num_extra_threads seems to assume that there are no other tasks which may need to run on global—generally no such guarantee can be made, since that requires a closed-world assumption, which global cannot make. (See the argument previously made w.r.t. that.)

the runtime can no longer ensure num_threads = num_cores + num_io_blocking_operations.

It never could, as the pool does not know whether semantic blocking is implemented with spin-locks or otherwise.
In general, since you cannot really have preemptive scheduling in user-land (especially not if you allow native code execution, see Erlang etc) the way to handle cooperative scheduling is for all parties to remain cooperative, which means avoiding "hogging" shared resources, and ideally divide longer "processes" into discrete steps to allow for other tasks to make progress.
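
For illustration only, a sketch of a dedicated pool for CPU-bound work sized along the lines of the 0.7 x num_cores rule of thumb above (the names and the exact factor are assumptions, not an official recommendation):

import java.util.concurrent.Executors
import scala.concurrent._

// leave headroom for GC and other JVM threads, per the heuristic above
val cpuBoundThreads = math.max(1, (Runtime.getRuntime.availableProcessors * 0.7).toInt)

val cpuBoundEc: ExecutionContextExecutorService =
  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(cpuBoundThreads))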

In any case, let's try to find a path forward, so what do you think about my proposition?

@lihaoyi (Author) commented Jul 27, 2020

In any case, let's try to find a path forward, so what do you think about my proposition?

A lot of things have been discussed, so I'll go through them one at a time.

  1. As mentioned earlier, I do not think special casing Future.apply is the right thing to do. It doesn't really solve the problem, and adds more complexity onto a really simple underlying model.

  2. If there's some way to preserve batching while still allowing the parallelism we were getting before, I would be all for it. But the Future internals are gnarly and I do not feel competent enough to really discuss that option. If you say it's not feasible, I have to believe you.

  3. As for introducing a batched version of global separately, I think that's the right thing to do. Ideally I'd like there to be a half-dozen different ExecutionContext.*s that I can import and try out. We already have .global and .parasitic; adding .batching and others would fit nicely.

  4. However, due to forward-and-backwards binary compatibility requirements, I do not believe this is possible for the next few years? I recall being told the Standard Library was frozen until Scala 3. Waiting until Scala 3.1 to solve the problem (2022? 2023?) seems like too long to wait for a regression of this sort.

  5. Similarly, while I agree with @alexandru's design ideas for how Futures can be improved, the standard library isn't really the place to experiment with those improvements, and in any case it isn't up for change until Scala 3.1. e.g. I have my own ideas of how we can extend scala.concurrent in a bunch of useful ways, but that kind of discussion is beyond the scope of this ticket.

  6. That leaves switching between batching and non-batching executors using an environment variable or JVM property, as @eed3si9n proposed. I think that sounds reasonable.

    • People who want to try it in some super-high-performance asynchronous system to improve performance can do so, but others would be able to migrate their 2.11/2.12 code to 2.13 without issue.
    • We already have precedent for configuring ExecutionContext.global via scala.concurrent.context.maxExtraThreads and other system properties, so this isn't unheard of.
    • This is basically a process-global version of swapping between import ExecutionContext.Implicits.global and import ExecutionContext.Implicits.batched. Not pretty, but it'll do as a stopgap until the standard library is unfrozen.
  7. In terms of documentation, as I mentioned to Matthew, the documentation on how ExecutionContext.global works (https://docs.scala-lang.org/overviews/core/futures.html) is great: precise, accurate, and it tells me exactly how it behaves and what the limitations are. This doesn't stop us from documenting a new ExecutionContext.batching in the same way. As I mentioned above, I would prefer having a buffet of different ExecutionContexts, each thoroughly documented, with clear instructions on what the different alternatives are and why you might choose each one.

That's my current understanding of our possible paths forward, feel free to correct anything that's mistaken :) I trust you guys have a better understanding of the tradeoffs than I do!

@mdedetrich

I think that's reasonable, although I personally find the JVM property configuration unnecessary. Just change the default ExecutionContext back to what it was pre-2.13.x, but leave the batching one available.

@julienrf commented Jul 27, 2020

4. However, due to forward-and-backwards binary compatibility requirements, I do not believe this is possible for the next few years? I recall being told the Standard Library was frozen until Scala 3. Waiting until Scala 3.1 to solve the problem (2022? 2023?) seems like too long to wait for a regression of this sort.

We can put it in scala-library-compat, which only has to be backward compatible.

@lrytz (Member) commented Jul 27, 2020

If really needed we can add special compiler support so that scala.concurrent.ExecutionContext.batching translates to some bytecode that's binary compatible with 2.13. The JVM property solution seems rather awkward.

I like the idea of using scala-library-compat, however we almost decided to rename it back to scala-collection-compat and use it only for backports, not for new 2.13 library features. But we could use this incident as the starting point for a new scala-library-future library that contains things that will become part of the next standard library.

@SethTisue added this to the 2.13.4 milestone on Jul 30, 2020
gabor-bakos-epam referenced this issue in soticsenge/maze on Aug 11, 2020
@lrytz added the blocker label on Sep 18, 2020
@lrytz self-assigned this on Sep 18, 2020
netbsd-srcmastr pushed a commit to NetBSD/pkgsrc that referenced this issue on Apr 30, 2023
Changelog (taken from https://github.com/scala/scala/releases):


Scala 2.13.10 Latest

The Scala team at Lightbend is pleased to announce the availability of Scala 2.13.10.

The following changes are highlights of this release:
Binary compatibility regression fixed

    Fix 2.13.9 regression which broke binary compatibility of case classes which are also value classes (#10155)

Library maintainers should avoid publishing libraries using Scala 2.13.9.
Other notable changes

    Fix 2.13.9 regression in linting, causing spurious "variable x is never used" warnings (#10154)
    -Xsource:3 now respects refinements by whitebox macro overrides (#10160 by @som-snytt)
    Scaladoc tool: fix parsing bug which could cause very slow performance or incorrect output (#10175 by @liang3zy22)
    Restore -Vprint-args, for echoing arguments provided to compiler (#10164 by @som-snytt)

For the complete 2.13.10 change lists, see all merged PRs and all closed bugs.
Compatibility

As usual for our minor releases, Scala 2.13.10 is binary-compatible with the whole Scala 2.13 series.

Upgrading from 2.12? Enable -Xmigration while upgrading to request migration advice from the compiler.
Contributors

A big thank you to everyone who's helped improve Scala by reporting bugs, improving our documentation, spreading kindness in discussions around Scala, and submitting and reviewing pull requests! You are all magnificent.

We especially acknowledge and thank A. P. Marki, also known as Som Snytt, who is responsible for an especially large share of the improvements in this release.

This release was brought to you by 6 contributors, according to git shortlog -sn --no-merges @ ^v2.13.9 ^2.12.x. Thank you A. P. Marki, Liang Yan, Seth Tisue, Antoine Parent, Luc Henninger, 梦境迷离.

Thanks to Lightbend for their continued sponsorship of the Scala core team’s efforts. Lightbend offers commercial support for Scala.
Scala 2.13 notes

The release notes for Scala 2.13.0 have important information applicable to the whole 2.13 series.
Obtaining Scala

Scala releases are available through a variety of channels, including (but not limited to):

    Bump the scalaVersion setting in your sbt-based project
    Download a distribution from scala-lang.org
    Obtain JARs via Maven Central



Scala 2.13.9

The following changes are highlights of this release:
Regression

Library maintainers should avoid publishing libraries using Scala 2.13.9. Please use 2.13.10 instead. 2.13.9 has a regression where binary-incompatible bytecode is emitted for case classes which are also value classes (case class ... extends AnyVal).
Compatibility with Scala 3

    Tasty Reader: Add support for Scala 3.2 (#10068)
    Tasty Reader: Restrict access to experimental definitions (#10020)
    To aid cross-building, accept and ignore using in method calls (#10064 by @som-snytt)
    To aid cross-building, allow ? as a wildcard even without -Xsource:3 (#9990)
    Make Scala-3-style implicit resolution explicitly opt-in rather than bundled in -Xsource:3 (#10012 by @povder)
    Prefer type of overridden member when inferring (under -Xsource:3) (#9891 by @som-snytt)

JDK version support

    Make -release more useful, deprecate -target, align with Scala 3 (#9982 by @som-snytt)
    Support JDK 19 (#10001 by @Philippus)

Warnings and lints

    Add -Wnonunit-statement to warn about discarded values in statement position (#9893 by @som-snytt)
    Make unused-import warnings easier to silence (support filtering by origin=) (#9939 by @som-snytt)
    Add -Wperformance lints for *Ref boxing and nonlocal return (#9889 by @som-snytt)

Language improvements

    Improve support for Unicode supplementary characters in identifiers and string interpolation (#9805 by @som-snytt)

Compiler options

    Use subcolon args to simplify optimizer options (#9810 by @som-snytt)
    For troubleshooting compiler, add -Vdebug-type-error (and remove -Yissue-debug) (#9824 by @tribbloid)

Security

    Error on source files with Unicode directional formatting characters (#10017)
    Prevent Function0 execution during LazyList deserialization (#10118)

Bugfixes

    Emit all bridge methods non-final (perhaps affecting serialization compat) (#9976)
    Fix null-pointer regression in Vector#prependedAll and Vector#appendedAll (#9983)
    Improve concurrent behavior of Java ConcurrentMap wrapper
    (#10027 by @igabaydulin)
    Preserve null policy in wrapped Java Maps (#10129 by @som-snytt)

Changes that shipped in Scala 2.12.16 and 2.12.17 are also included in this release.

For the complete 2.13.9 change lists, see all merged PRs and all closed bugs.
Compatibility

As usual for our minor releases, Scala 2.13.9 is binary-compatible with the whole Scala 2.13 series.

Upgrading from 2.12? Enable -Xmigration while upgrading to request migration advice from the compiler.
Contributors

A big thank you to everyone who's helped improve Scala by reporting bugs, improving our documentation, spreading kindness in discussions around Scala, and submitting and reviewing pull requests! You are all magnificent.

We especially acknowledge and thank A. P. Marki, also known as Som Snytt, who is responsible for an especially large share of the improvements in this release.

This release was brought to you by 27 contributors, according to git shortlog -sn --no-merges @ ^v2.13.8 ^2.12.x. Thank you A. P. Marki, Lukas Rytz, Seth Tisue, Jamie Thompson, Sébastien Doeraene, Scala Steward, Georgi Krastev, Jason Zaugg, Philippus, Balys Anikevicius, Gilad Hoch, NthPortal, Zhang Zhipeng, Arman Bilge, Dale Wijnand, Dominik Helm, Eric Huang, Guillaume Martres, Harrison Houghton, Krzysztof Pado, Michał Pałka, Zeeshan Arif, counter2015, jxnu-liguobin, mcallisto, naveen, philwalk.

Thanks to Lightbend for their continued sponsorship of the Scala core team’s efforts. Lightbend offers commercial support for Scala.
Scala 2.13 notes

The release notes for Scala 2.13.0 have important information applicable to the whole 2.13 series.
Obtaining Scala

Scala releases are available through a variety of channels, including (but not limited to):

    Bump the scalaVersion setting in your sbt-based project
    Download a distribution from scala-lang.org
    Obtain JARs via Maven Central



Scala 2.13.8

The Scala team at Lightbend is pleased to announce the availability of Scala 2.13.8.

This is a modest, incremental release focused on addressing regressions in 2.13.7.
Highlights

    Make REPL work again on Mac M1 (upgrade JLine & JNA) (#9807 by @SethTisue)
    Fix slicing of views of IndexedSeqs (including fixing 2.13.7 reverseIterator regression) (#9799 by @som-snytt)
    Fix 2.13.7 regression in implicit resolution (#9829 by @joroKr21)
    Fix 2.13.7 releaseFence regression affecting GraalVM compatibility (#9825 by @lrytz)
    Fix 2.13.7 regression affecting wildcards and F-bounded types (#9806 by @joroKr21)

A few small changes that will ship in 2.12.16 are also included in this release.

For the complete 2.13.8 change lists, see all merged PRs and all closed bugs.
Compatibility

As usual for our minor releases, Scala 2.13.8 is binary-compatible with the whole Scala 2.13 series.

Upgrading from 2.12? Enable -Xmigration while upgrading to request migration advice from the compiler.
Contributors

A big thank you to everyone who's helped improve Scala by reporting bugs, improving our documentation, spreading kindness in discussions around Scala, and submitting and reviewing pull requests! You are all magnificent.

This release was brought to you by 8 contributors, according to git shortlog -sn --no-merges @ ^v2.13.7 ^2.12.x. Thank you A. P. Marki, Seth Tisue, Georgi Krastev, Jason Zaugg, Lukas Rytz, Martijn Hoekstra, Philippus Baalman, Chris Kipp.

Thanks to Lightbend for their continued sponsorship of the Scala core team’s efforts. Lightbend offers commercial support for Scala.
Scala 2.13 notes

The release notes for Scala 2.13.0 have important information applicable to the whole 2.13 series.
Obtaining Scala

Scala releases are available through a variety of channels, including (but not limited to):

    Bump the scalaVersion setting in your sbt-based project
    Download a distribution from scala-lang.org
    Obtain JARs via Maven Central



Scala 2.13.7

The Scala team at Lightbend is pleased to announce the availability of Scala 2.13.7.
Align with Scala 3

    Update TASTy reader to support Scala 3.1 (#9791 by @bishabosha)
    Allow import x.{*, given} under -Xsource:3 (#9724 by @smarter)
    Allow case in pattern bindings even without -Xsource:3 (#9721 by @smarter)
    Deprecate top-level wildcard type parameters (#9712 by @som-snytt)

JDK and Java compatibility

    Support JDK 18 (#9702 by @SethTisue)
    Support JDK 16 records in Java sources (#9551 by @harpocrates)
    Allow concrete private interface methods in Java sources (#9748 by @dengziming)
    Use StringConcatFactory for string concatenation on JDK 9+ (#9556 by @harpocrates)

Android compatibility

    Add ClassValueCompat to support systems without java.lang.ClassValue (such as Android) (#9752 by @nwk37011)
    For Android compatibility, make Statics.releaseFence() also catch NoSuchMethodException for java.lang.invoke.VarHandle.releaseFence() call (#9739 by @nwk37011)

Concurrency

    Fix asymmetric failure behavior of Future#{zip,zipWith,traverse,sequence} by making them fail fast regardless of ordering (#9655 by @lihaoyi)

Collections

    Make ArrayBuffer's iterator fail fast when buffer is mutated (#9258 by @NthPortal)
    Fix ArrayOps bugs (by avoiding ArraySeq#array, which does not guarantee element type) (#9641 by @som-snytt)
    Deprecate IterableOps.toIterable (#9774 by @lrytz)

Other changes

    Accept supplementary Unicode characters in identifiers (#9687 by @som-snytt)
    Improve tab completion and code assist in REPL (#9656 by @retronym)

Some small changes that will ship in 2.12.16 are also included in this release.

For the complete 2.13.7 change lists, see all merged PRs and all closed bugs.
Compatibility

As usual for our minor releases, Scala 2.13.7 is binary-compatible with the whole Scala 2.13 series.

Upgrading from 2.12? Enable -Xmigration while upgrading to request migration advice from the compiler.
Contributors

A big thank you to everyone who's helped improve Scala by reporting bugs, improving our documentation, spreading kindness in discussions around Scala, and submitting and reviewing pull requests! You are all magnificent.

This release was brought to you by 25 contributors, according to git shortlog -sn --no-merges @ ^v2.13.6 ^2.12.x. Thank you A. P. Marki, Lukas Rytz, Seth Tisue, Jason Zaugg, Jamie Thompson, NthPortal, Georgi Krastev, Guillaume Martres, Dale Wijnand, Martijn Hoekstra, Alec Theriault, Rafał Sumisławski, Matt Dziuban, Li Haoyi, Doug Roper, Sébastien Doeraene, VladKopanev, danicheg, dengziming, megri, nwk37011, Magnolia.K, 梦境迷离, Mathias, James Judd.

Thanks to Lightbend for their continued sponsorship of the Scala core team’s efforts. Lightbend offers commercial support for Scala.
Scala 2.13 notes

The release notes for Scala 2.13.0 have important information applicable to the whole 2.13 series.
Obtaining Scala

Scala releases are available through a variety of channels, including (but not limited to):

    Bump the scalaVersion setting in your sbt-based project
    Download a distribution from scala-lang.org
    Obtain JARs via Maven Central



Scala 2.13.6

The Scala 2 team at Lightbend is pleased to announce the availability of Scala 2.13.6.
Highlights

    TASTy Reader support for Scala 3.0.0 (#9617 by @bishabosha)
    The splain compiler plugin by @tek was integrated into the compiler, available with the -Vimplicits and -Vtype-diffs flags (#7785)
    Escaped double quotes now work as expected in string interpolations, both s"\"" and s"$"" (#8830 by @eed3si9n and #9536 by @martijnhoekstra)

Other Changes

    Optimized BigInt implementation (#9628) by @denisrosset
    Support JDK15 text blocks in Java parser (#9548) by @harpocrates
    Stricter override checking for protected Scala members which override Java members (#9525) by @kynthus
    Check private[this] members in override checking (#9542)
    More accurate outer checks in patterns (#9504)
    Allow renaming imports from _root_ (#9482) by @som-snytt
    Make more annotations extend ConstantAnnotation (9336) by @BalmungSan
    A number of syntax changes were added to simplify cross-building between Scala 2 and 3
        Don't error (only warn) on symbol literals under -Xsource:3 (#9602)
        Support writing & instead of with in types under -Xsource:3 (#9594)
        Support Scala 3 vararg splice syntax under -Xsource:3 (#9584)
        Support Scala 3 wildcard and renaming imports under -Xsource:3 (#9582)
        Allow soft keywords open and infix under -Xsource:3 (#9580)
        Align leading infix operator with Scala 3 improvements (#9567)
        Support ? as wildcard marker under -Xsource:3 (#9560)
        Support case in pattern bindings under -Xsource:3 (#9558)
        Parse +_ and -_ in types as identifiers under -Xsource:3 to support Scala 3.2 placeholder syntax (#9605)

Some small changes that will ship in 2.12.14 are also included in this release.

For the complete 2.13.6 change lists, see all merged PRs and all closed bugs.
Compatibility

As usual for our minor releases, Scala 2.13.6 is binary-compatible with the whole Scala 2.13 series.

Upgrading from 2.12? Enable -Xmigration while upgrading to request migration advice from the compiler.
Contributors

A big thank you to everyone who's helped improve Scala by reporting bugs, improving our documentation, spreading kindness in discussions around Scala, and submitting and reviewing pull requests! You are all magnificent.

This release was brought to you by 25 contributors, according to git shortlog -sn --no-merges HEAD ^v2.13.5 ^2.12.x. Thank you A. P. Marki, Lukas Rytz, Dale Wijnand, Jamie Thompson, Seth Tisue, 梦境迷离, Guillaume Martres, Martijn Hoekstra, Denis Rosset, Aaron S. Hawley, Kai, Eugene Yokota, Jason Zaugg, Anatolii Kmetiuk, Ikko Ashimine, superseeker13, Eugene Platonov, Diego E. Alonso Blas, Filipe Regadas, Hatano Yuusuke, Luis Miguel Mejía Suárez, Rafał Sumisławski, Alec Theriault, Tom Grigg, Torsten Schmits.

Thanks to Lightbend for their continued sponsorship of the Scala core team’s efforts. Lightbend offers commercial support for Scala.
Scala 2.13 notes

The release notes for Scala 2.13.0 have important information applicable to the whole 2.13 series.
Obtaining Scala

Scala releases are available through a variety of channels, including (but not limited to):

    Bump the scalaVersion setting in your sbt-based project
    Download a distribution from scala-lang.org
    Obtain JARs via Maven Central



Scala 2.13.5
Scala 2.13.5

The Scala 2 team at Lightbend is pleased to announce the availability of Scala 2.13.5.
Highlights

    TASTy reader: add support for Scala 3.0.0-RC1 (#9501, #9394, #9357) — thank you @bishabosha!
    Allow name-based extractors to be irrefutable (#9343) — thank you @martijnhoekstra!
    Upgrade to ASM 9.1, for JDK 16 and 17 support in the optimizer (#9489, #9480)

Other changes

    Assorted improvements to exhaustivity checking in pattern matching (#9479, #9472, #9474, #9313, #9462)
    Assorted improvements to handling of higher-kinded types, aligning with Scala 3 (#9400, #9404, #9405, #9414, #9417, #9439) — thank you @joroKr21!
    Make -target support JVM 13, 14, 15, 16, and 17 (#9489, #9481)
    Omit @nowarn annotations from generated code, for forwards compatibility at compile-time (#9491)
    Add linting of unused context bounds (via -Wunused:synthetics or -Wunused:params) (#9346; sketched below) — thank you @som-snytt!
    Lift artificial restrictions on ConstantAnnotations (#9379)
    Make Java Map wrappers handle nulls according to put/remove contract (#9344) — thank you @som-snytt!
    Make language specification available as a PDF (#7432) — thank you @sake92!
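
For instance, the new context-bound lint behaves roughly like this (a minimal sketch; compile with -Wunused:synthetics):

    object Lint {
      // The synthesized Ordering[A] evidence is never used, so -Wunused:synthetics warns here.
      def count[A: Ordering](xs: List[A]): Int = xs.length

      // Here the evidence is consumed by `sorted`, so no warning is issued.
      def smallest[A: Ordering](xs: List[A]): A = xs.sorted.head
    }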

Some small changes that will ship in 2.12.14 are also included in this release.

For complete 2.13.5 change lists, see all merged PRs and all closed bugs.
Compatibility

As usual for our minor releases, Scala 2.13.5 is binary-compatible with the whole Scala 2.13 series.

Upgrading from 2.12? Enable -Xmigration while upgrading to request migration advice from the compiler.
Contributors

A big thank you to everyone who's helped improve Scala by reporting bugs, improving our documentation, spreading kindness in discussions around Scala, and submitting and reviewing pull requests! You are all magnificent.

This release was brought to you by 23 contributors, according to git shortlog -sn --no-merges HEAD ^v2.13.4 ^2.12.x. Thank you Seth Tisue, A. P. Marki, Dale Wijnand, NthPortal, Jamie Thompson, Lukas Rytz, Martijn Hoekstra, Georgi Krastev, Jason Zaugg, Jasper Moeys, Sakib Hadziavdic, Anatolii Kmetiuk, Arnaud Gourlay, Marcono1234, Chia-Ping Tsai, Mike Skells, Stefan Zeiger, Waleed Khan, Yann Bolliger, Guillaume Martres, 梦境迷离, Ethan Atkins, Darcy Shen.

Thanks to Lightbend for their continued sponsorship of the Scala core team’s efforts. Lightbend offers commercial support for Scala.
Scala 2.13 notes

The release notes for Scala 2.13.0 have important information applicable to the whole 2.13 series.
Obtaining Scala

Scala releases are available through a variety of channels, including (but not limited to):

    Bump the scalaVersion setting in your sbt-based project
    Download a distribution from scala-lang.org
    Obtain JARs via Maven Central



Scala 2.13.4

Scala 2.13.4:

    Restores default global ExecutionContext to 2.12 behavior
    Improves pattern matching, especially in exhaustivity checking
    Adds experimental support for consuming some libraries built by Scala 3

and more! Details below.
Concurrency

NOTE The following change affects parallelism and performance. If you use scala.concurrent.ExecutionContext.global you may
want to adapt your code. (But note that Akka is unaffected, because it uses its own execution contexts.)

In 2.13.0 we made ExecutionContext.global "opportunistic". This enabled "batching" of nested tasks
to execute on the same thread, avoiding an expensive context switch. That strategy requires
user code to wrap long-running and/or blocking tasks with blocking { ... } to maintain parallel
execution.

For 2.13.4, we restore 2.12's default non-batching behavior, which is safer for arbitrary user code. Users wanting
increased performance may override the default, if they believe their code uses blocking correctly.
We make that choice available via ExecutionContext.opportunistic.

Using ExecutionContext.opportunistic requires a bit of extra boilerplate, made necessary by binary
compatibility constraints on the standard library. Detailed instructions are in
ExecutionContext.global's Scaladoc.

Further detail: #9270/#9296/scala/bug#12089.
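
As a minimal sketch of the blocking { ... } contract mentioned above (illustrative code; the exact opt-in boilerplate for ExecutionContext.opportunistic is the one in the Scaladoc and is not reproduced here):

    import scala.concurrent.{Future, blocking}
    import scala.concurrent.ExecutionContext.Implicits.global

    // On the non-batching default global context restored in 2.13.4, wrapping
    // long-running or blocking work in blocking { ... } lets the underlying pool
    // compensate with extra threads, so sibling futures keep running in parallel.
    def slowFetch(key: String): Future[String] = Future {
      blocking {
        Thread.sleep(1000) // stand-in for a blocking call
        key
      }
    }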
Pattern matching

The pattern matcher is now much better at warning you if a match isn't exhaustive.

The following types of matches no longer disable exhaustivity checking:

    guards (case <pattern> if <condition> => ...) #9140
    custom extractors (user-defined unapply or unapplySeq) #9140/#9162
    unsealed types, if you opt in via -Xlint or -Xlint:strict-unsealed-patmat #9140/#9299

Additionally,

    private classes are now treated as if sealed #9211
    singleton types no longer prematurely widen #9209
    tuples are handled properly #9147/#9163/#9147

New warnings reported can be resolved by (see the sketch after these lists):

    adding any missing cases
    in the case of complementary guards (e.g. if n > 0 and if n <= 0) by dropping the last guard
    for custom extractors: marking irrefutable extractors as such, by defining the return type as Some
    for sealed types: marking traits or parent classes sealed, parent classes abstract, and classes final
    explicitly declaring the default case: case x => throw new MatchError(x)

Otherwise, your options for suppressing warnings include:

    annotate the scrutinee with @unchecked, such as (foo: @unchecked) match { ... }
    disable exhaustivity checking in the presence of guards and custom extractors with -Xnon-strict-patmat-analysis
    disable exhaustivity checking of unsealed types with -Xlint:-strict-unsealed-patmat
    use -Wconf to suppress the warnings globally, with e.g. -Wconf:msg=match may not be exhaustive:i
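
A brief sketch of the stricter guard handling and two of the remedies listed above (names are illustrative):

    sealed trait Status
    final case class Active(users: Int) extends Status
    case object Idle extends Status

    def describe(s: Status): String = s match {
      case Active(n) if n > 0 => s"busy ($n)"
      case Idle               => "idle"
    }
    // 2.13.4 now warns that this match may not be exhaustive (Active with n <= 0
    // is unhandled), whereas the guard used to disable the check entirely.

    // Remedy: add the missing case ...
    def describeAll(s: Status): String = s match {
      case Active(n) if n > 0 => s"busy ($n)"
      case Active(_)          => "inactive"
      case Idle               => "idle"
    }

    // ... or suppress the check for this one scrutinee.
    def describeUnchecked(s: Status): String = (s: @unchecked) match {
      case Active(n) if n > 0 => s"busy ($n)"
      case Idle               => "idle"
    }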

Scala 3 interop

This release enables the Scala 2 compiler to consume some libraries built in Scala 3. #9109/#9293

The new capability is experimental. To enable it, add -Ytasty-reader to your compiler options.

Not all Scala 3-built libraries are supported, because not all Scala 3 features can be supported.
The library author must stay within the supported subset.

For more details and caveats see the blog post Forward Compatibility for the Scala 3 Transition.
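
In sbt terms, opting in looks roughly like this (a sketch; the library coordinates are made up, and CrossVersion.for2_13Use3 requires a reasonably recent sbt):

    // build.sbt (sketch)
    scalaVersion := "2.13.4"
    scalacOptions += "-Ytasty-reader"
    libraryDependencies +=
      ("org.example" %% "some-scala3-library" % "1.0.0").cross(CrossVersion.for2_13Use3)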
Standard library changes

    When compiling on JDK 15, avoid clash with new CharSequence#isEmpty method #9292
        The clash is avoided by making CharSequence wrappers in Predef non-implicit.
        The change is binary compatible, but not source compatible. Call sites may need updating.
    Make LazyList.cons.apply lazier #9095
    Make MapView#values preserve laziness #9090
    Make ListBuffer's iterator fail when the buffer is mutated (#9174; sketched below)
    Un-deprecate useful StringOps methods, despite Unicode concerns #9246
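
For instance, the ListBuffer change above is meant to surface iterator-invalidation bugs instead of hiding them (a minimal sketch):

    import scala.collection.mutable.ListBuffer

    val buf = ListBuffer(1, 2, 3)
    val it  = buf.iterator
    buf += 4        // mutate while an iterator is live
    it.next()       // 2.13.4: iterating after the mutation now fails fast,
                    // where it previously could observe inconsistent state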

Compiler changes

    Allow using classOf with object type (e.g. classOf[Foo.type]) #9279 (sketched below)
    Fix back-quoted constructor params with identical prefixes #9008
    Enable range positions (-Yrangepos) by default #9146
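
The classOf change above can be illustrated as follows (a sketch with an illustrative object name):

    object Config

    // Accepted since 2.13.4 (#9279); previously classOf rejected singleton (object) types.
    val moduleClass: Class[Config.type] = classOf[Config.type]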

Other changes

Some changes that will ship in 2.12.13 are also included in this release, most notably:

    When compiling on JDK 15, avoid clash with new CharSequence#isEmpty method #9292
        To avoid the clash, implicit was removed from Predef's implicit conversions to SeqCharSequence and ArrayCharSequence.
        This change is binary compatible, but not source compatible. User code may need updating. See PR for details.

For complete 2.13.4 change lists, see all merged PRs and all closed bugs.
Compatibility

As usual for our minor releases, Scala 2.13.4 is binary-compatible with the whole Scala 2.13 series.

Upgrading from 2.12? Enable -Xmigration while upgrading to request migration advice from the compiler.
Contributors

A big thank you to everyone who's helped improve Scala by reporting bugs, improving our documentation, spreading kindness in discussions around Scala, and submitting and reviewing pull requests! You are all magnificent.

This release was brought to you by 40 contributors, according to git shortlog -sn --no-merges HEAD ^v2.13.3 ^2.12.x. Thank you Jamie Thompson, Dale Wijnand, A. P. Marki, NthPortal, Lukas Rytz, Seth Tisue, Jason Zaugg, Georgi Krastev, Eugene Yokota, Martijn Hoekstra, Trey Cahill, Rado Buransky, Ergys Dona, Mike Skells, Greg Pfeil, Kazuhiro Sera, Mitsuhiro Shibuya, NagaChaitanya Vellanki, Sergei Petunin, Sébastien Doeraene, Takahashi Osamu, Viktor Klang, mwielocha, Nicolas Stucki, Jan Arne Sparka, Philippus Baalman, Glenn Liwanag, Rafał Sumisławski, Renato Cavalcanti, Sergei, nooberfsh, Dmitrii Naumenko, Simão Martins, counter2015, Jian Lan, Liu Fengyun, Kanishka, Julien Richard-Foy, Janek Bogucki, Björn Regnell.

Thanks to Lightbend for their continued sponsorship of the Scala core team’s efforts. Lightbend offers commercial support for Scala.
Scala 2.13 notes

The release notes for Scala 2.13.0 have important information applicable to the whole 2.13 series.
Obtaining Scala

Scala releases are available through a variety of channels, including (but not limited to):

    Bump the scalaVersion setting in your sbt-based project
    Download a distribution from scala-lang.org
    Obtain JARs via Maven Central



Scala 2.13.3

Scala 2.13.3 is primarily a bugfix release.

It also includes:

    improvements to warnings and linting
    experimental -Xasync support

For more detail, read on.
Behavior changes

    Symbol#toString is now Symbol(foo) instead of the deprecated single-quote form 'foo (#8933)

Bugfixes

    Fix 2.13-only bug in Java collection converters that caused some operations to perform an extra pass (#9058)
    Fix 2.13.2 performance regression in Vector: restore special cases for small operands in appendedAll and prependedAll (#9036)
    Increase laziness of #:: for LazyList (#8985)
    Allow trailing backslash in string interpolators (#8942)
    Respect @uncheckedVariance in higher-kinded types (fixing 2.13.2 regression) (#8938)

Warnings and linting

    Deprecate auto-application of (non-Java-defined) methods with a single empty parameter list (#8833; sketched below)
        The PR has instructions for suppressing the warning if it is unwanted
    Warn by default on mismatch of presence/absence of an empty parameter list when overriding (#8846)
        -Xlint:nullary-override is no longer accepted, since this now warns by default
    Discourage multi-argument infix syntax: lint applications (x op (a, b)), also lint operator-name definitions (#8951)
    Fix @nowarn to use correct semantics for & (#9032)
    Make -Wunused:imports work again even when -Ymacro-annotations is enabled (#8962)
    Replace -Wself-implicit with -Xlint:implicit-recursion (#9019)
    Under -Xsource:3, disallow auto-eta-expansion of SAMs (#9049)
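
For example, the auto-application deprecation above fires on code like this (a sketch):

    object AutoApply {
      def status(): String = "ok"

      val a = status    // 2.13.3: deprecated, empty-paren method applied without ()
      val b = status()  // fine: write the parentheses explicitly
    }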

Experimental -Xasync

This successor to scala-async allows usage with other effect systems besides scala.concurrent.Future.

    Compiler support for scala-async; enable with -Xasync (#8816)

We will publish a blog post with more detail on this work by @retronym, building on his earlier collaboration with @phaller. In the meantime, see the PR description.

This feature will also be included in the 2.12.12 release.
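
A minimal sketch of how this is typically consumed, assuming the scala-async library (coordinates and setup are illustrative; see the PR and the scala-async README for the authoritative instructions):

    // build.sbt (sketch; the scala-async version is deliberately left unspecified):
    //   scalacOptions += "-Xasync"
    //   libraryDependencies += "org.scala-lang.modules" %% "scala-async" % "<version>"
    //   libraryDependencies += "org.scala-lang" % "scala-reflect" % scalaVersion.value % Provided

    import scala.async.Async.{async, await}
    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration.Duration

    object AsyncDemo extends App {
      val combined: Future[Int] = async {
        val a = Future(21)
        val b = Future(21)
        await(a) + await(b)   // rewritten into a non-blocking state machine by -Xasync
      }
      println(Await.result(combined, Duration.Inf))
    }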
Other changes

For complete 2.13.3 change lists, see all merged PRs and all closed bugs.

Some changes that will ship in 2.12.12 are also included in this release, most notably:

    Annotation parsing & @deprecated (#8781)
    Fix Scaladoc tool on JDK 11 with -release 8: exclude sig files in Symbol#sourceFile (#8849)

Compatibility

As usual for our minor releases, Scala 2.13.3 is binary-compatible with the whole Scala 2.13 series.

Upgrading from 2.12? Enable -Xmigration during upgrade to request migration advice from the compiler.
Contributors

A big thank you to everyone who's helped improve Scala by reporting bugs, improving our documentation, spreading kindness in discussions around Scala, and submitting and reviewing pull requests! You are all magnificent.

This release was brought to you by 28 contributors, according to git shortlog -sn --no-merges HEAD ^v2.13.2 ^2.12.x. Thank you A. P. Marki, Jason Zaugg, Seth Tisue, Dale Wijnand, Lukas Rytz, Georgi Krastev, David Barri, Eugene Yokota, Diego E. Alonso Blas, Akhtiam Sakaev, Glenn Liwanag, changvvb, Evgeny Ganchurin, Mike Skells, Martijn Hoekstra, yudedako, Anatolii Kmetiuk, Gilles Peiffer, JyotiSachdeva.ext, Karol Chmist, Kenji Yoshida, Lorenzo Costanzia di Costigliole, NthPortal, Steven Barnes, Sébastien Doeraene, Travis Brown, counter2015, nogurenn.

Thanks to Lightbend for their continued sponsorship of the Scala core team’s efforts. Lightbend offers commercial support for Scala.
Scala 2.13 notes

The release notes for Scala 2.13.0 have important information applicable to the whole 2.13 series.
Obtaining Scala

Scala releases are available through a variety of channels, including (but not limited to):

    Bump the scalaVersion setting in your sbt-based project
    Download a distribution from scala-lang.org
    Obtain JARs via Maven Central



Scala 2.13.2

Scala 2.13.2 has:

    a brand-new Vector implementation
    configurable warnings
    an improved REPL (now JLine 3 based)
    bugfixes and more

Vector

    Rewrite Vector (using "radix-balanced finger tree vectors"), for performance (#8534)

Small vectors are now more compactly represented. Some operations are now drastically faster on large vectors. A few operations may be a little slower.

Kudos to @szeiger for this work.
Configurable warnings

    Add -Wconf flag for configurable warnings, @nowarn annotation for local suppression (#8373; sketched below)

Note that scala-collection-compat 2.1.6 (or newer) provides @nowarn for cross-built projects (as a no-op on 2.11 and 2.12).

Special thanks to Roman Janusz (@ghik), whose silencer plugin was the basis for this work.
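
For example (a sketch with made-up settings), a build might silence a category of warnings globally with -Wconf, and suppress a single known warning at its definition site with @nowarn:

    // scalacOptions (sketch):
    //   -Wconf:cat=deprecation:s        silence all deprecation warnings
    //   -Wconf:msg=legacy:s             silence warnings whose message matches "legacy"

    import scala.annotation.nowarn

    object Warnings {
      @deprecated("use current() instead", "1.0")
      def legacy(): Int = 1

      @nowarn("cat=deprecation")   // suppress the deprecation warning for this definition only
      def stillUsesLegacy(): Int = legacy()
    }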
REPL improvements

    REPL: upgrade to JLine 3 (benefits include multi-line editing) (#8036)
    Enable -Yrepl-class-based and -Yuse-magic-imports by default (#8748)
        -Yrepl-class-based avoids deadlocks
        -Yuse-magic-imports improves performance for long sessions
    Improve REPL display of method types (#8319)

Special thanks to @som-snytt for spearheading the JLine 3 upgrade.

We are tracking JLine-related improvements and regressions here. There are some known regressions in less-vital features and behaviors; we plan to address these in future 2.13.x releases.
Language changes

    Unicode escapes are now ordinary escape sequences (not processed early) (#8282)

Compiler fixes

    Plug many variance holes (in higher-kinded types, refined types, and private inner classes) (#8545)
    Fix variance handling for parameterized type aliases (#8651)
    Exclude universal members (getClass, toString, etc) from root module import (#8541)
    Matching strings makes switches in bytecode (#8451)

Deprecations

    Deprecate eta-expansion, via trailing underscore, of methods with no argument lists (#8836)
    Deprecate nested class shadowing in "override" position (#8705)
    Deprecate numeric conversions that lose precision (e.g., Long to Double) (#8679)
    Deprecate numeric widening of numeric literals which are not representable with Float/Double (#8757)
    Deprecate old-style constructor syntax (#8591)

Improvements from the future

    There is no more -Xsource:2.14, only -Xsource:3 (#8812)
    Allow infix operators at start of line (under -Xsource:3) (#8419)
    Case class copy and apply inherit access modifiers from constructor (under -Xsource:3) (#7702)
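
The last item can be sketched as follows (illustrative names, behavior as described in #7702):

    // compiled with -Xsource:3
    case class Account private (id: Long)

    object Account {
      // The companion can still call the (private) constructor, and the synthetic
      // apply/copy, which now inherit that access modifier, remain usable here.
      def fresh(): Account = Account(0L)
    }

    // Elsewhere, Account(1L) and someAccount.copy(id = 2L) are rejected under
    // -Xsource:3, matching the constructor's visibility; without the flag they
    // stay public as before.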

Other fixes and improvements

    Un-deprecate default floating point Orderings; issue migration warning instead under -Xmigration (#8721)
    Support macro annotation expansions in -Wmacros:MODE (#8799)
    Scaladoc can now link to Javadoc for the Java standard library for JDK versions 9 and up (overridable with new -jdk-api-doc-base flag) (#8663)
    sys.env now throws on null environment variable (#8579)
    Make the hashcode method ## have no parameter list (instead of a single empty one) (#8814)

This is not a complete list of changes. For that, see all merged PRs and all closed bugs.

2.13.2 also includes the changes in Scala 2.12.11, most notably:

    Make optimizer work on JDK 13+ (#8676).

Compatibility

As usual for our minor releases, Scala 2.13.2 is binary-compatible with the whole Scala 2.13 series.

Upgrading from 2.12? Enable -Xmigration while upgrading to request migration advice from the compiler.
Contributors

A big thank you to everyone who's helped improve Scala by reporting bugs, improving our documentation, spreading kindness in discussions around Scala, and submitting and reviewing pull requests! You are all magnificent.

This release was brought to you by 45 contributors, according to git shortlog -sn --no-merges HEAD ^v2.13.1 ^2.12.x. Thank you Som Snytt, Jason Zaugg, Lukas Rytz, Dale Wijnand, Seth Tisue, Diego E. Alonso Blas, Georgi Krastev, Martijn Hoekstra, Eugene Yokota, Harrison Houghton, Stefan Zeiger, NthPortal, Anatolii, Linas Medžiūnas, Aaron S. Hawley, Guillaume Martres, Josh Lemer, Sébastien Doeraene, Jasper Moeys, Julien Truffaut, Oskar Haarklou Veileborg, Lucas Cardoso, Andrew Valencik, Adriaan Moors, yudedako, Steven Barnes, Brian Wignall, Ausmarton Zarino Fernandes, Oguz Albayrak, Philippus, Viktor Klang, Yang Bo, bnyu, psilospore, sinanspd, wholock, Jamie Thompson, Hamza Meknassi, Janek Bogucki, Flash Sheridan, Fabian Page, Kenji Yoshida, Denis Rosset, Lucas S Cardoso, Chris Birchall.

Thanks to Lightbend for their continued sponsorship of the Scala core team’s efforts. Lightbend offers commercial support for Scala.
Scala 2.13 notes

The release notes for Scala 2.13.0 have important information applicable to the whole 2.13 series.
Obtaining Scala

Scala releases are available through a variety of channels, including (but not limited to):

    Bump the scalaVersion setting in your sbt-based project
    Download a distribution from scala-lang.org
    Obtain JARs via Maven Central



Scala 2.13.1

Scala 2.13.1 is primarily a bug fix release that fixes several regressions in 2.13.0.
Collection-related regressions

    Revert Stream.Cons to the 2.12 encoding (#8354)
    Don't rebuild scala.Seq to drop elems in unapplySeq (#8340)
    Blacken subtrees where necessary in RedBlackTree.take (#8287)
    Iterator#flatMap#hasNext calls outer#hasNext 1 time, not 2-3 times (#8220)
    s.c.Map#values returns a strict Iterable rather than a View (#8195)
    Vector.from(ArraySeq) copies elems rather than reusing unsafeArray (#8194)
    Fix mutable.HashSet.addAll: remove redundant call to super method (#8192)
    Fix mutable.ArraySeq.ofChar#addString (#8176)
    Fix HashMap#mapValuesInPlace (#8421)

Other regressions

    Avoid spurious "illegal cyclic reference" errors (#8382)
    Stabilize args of apply (#8202)
    Reject incomplete implicit dictionaries (#8201)
    Process exit code on script errors (#8169)
    Fix type inference involving wildcards (#8129)

Other bug fixes and improvements

    Extend the Gradle / sbt 0.13 leniency to Windows (#8408)
    Avoid unnecessary toSeq conversions in Seq methods (#8374)
    Avoid memory leaks in Stream methods (#8367)
    Precompile -i files for script runner (#8349)
    Stop warning on higher-kinded usage without -language:higherKinds (#8348)
    Simplify reporters (#8338)
    More efficient ArraySeq iteration (#8300)
    Enable hyperlinking to Java docs (#8284)
    Parent implicitNotFound message is supplemental (#8280)
    Add protected and private visibility filters to scaladoc (#8183)
    Fix vulnerability in jQuery used in ScalaDoc (#8179)
    Synthesize a PartialFunction from function literal (#8172)
    Fix parsing of try (#8071)
    Support emitting Java 9 bytecode by adding "-target:9" (#8060)
    Deprecate mutable.MultiMap (#8005)
    Add syntactic sugar for if(_) (#7707)
    A foreign definition induces ambiguity (#7609)

This is not a complete list of changes. For that, see all merged PRs and all closed bugs.
Compatibility

Upgrading from 2.12? Enable -Xmigration while upgrading to request migration advice from the compiler.

As usual for our minor releases, Scala 2.13.1 is binary-compatible with the whole Scala 2.13 series.
Contributors

A big thank you to everyone who's helped improve Scala by reporting bugs, improving our documentation,
spreading kindness in discussions around Scala, and submitting and reviewing pull requests! You are all magnificent.

This release was brought to you by 43 contributors, according to git shortlog -sn --no-merges HEAD ^v2.13.0 ^upstream/2.12.x. Thank you Som Snytt, Lukas Rytz, Aaron S. Hawley, exoego, Jason Zaugg, Dale Wijnand, Seth Tisue, Stefan Zeiger, NthPortal, Martijn Hoekstra, Jasper Moeys, Josh Lemer, Isaac Levy, Harrison Houghton, Benjamin Kurczyk, redscarf, 杨博 (Yang Bo), Adriaan Moors, Anatolii Kmetiuk, Eugene Yokota, Georgi Krastev, Miles Sabin, Philippus, xuwei-k, Magnolia.K, Mike Skells, 2efPer, Mitesh Aghera, NomadBlacky, Guillaume Martres, Odd Möller, yui-knk, Georg, Flash Sheridan, Diego E. Alonso Blas, Sébastien Doeraene, Atsushi Araki, psilospore, Akhtyam Sakaev, wanying.chan, Li Haoyi, M.Shibuya, Kota Mizushima.

Thanks to Lightbend for their continued sponsorship of the Scala core team’s efforts. Lightbend offers commercial support for Scala.
Scala 2.13 notes

The release notes for Scala 2.13.0 have important information applicable to the whole 2.13 series.
Obtaining Scala

Scala releases are available through a variety of channels, including (but not limited to):

    Bump the scalaVersion setting in your sbt-based project
    Download a distribution from scala-lang.org
    Obtain JARs via Maven Central