diff --git a/build.sbt b/build.sbt index ddcd70bc45..220ac30514 100644 --- a/build.sbt +++ b/build.sbt @@ -312,9 +312,7 @@ lazy val core = crossProject(JSPlatform, JVMPlatform) ProblemFilters.exclude[MissingClassProblem]("cats.effect.SyncIO$Delay"), ProblemFilters.exclude[DirectMissingMethodProblem]("cats.effect.IO#IOCont.apply"), ProblemFilters.exclude[DirectMissingMethodProblem]("cats.effect.IO#IOCont.copy"), - ProblemFilters.exclude[DirectMissingMethodProblem]("cats.effect.IO#IOCont.this") - ) - ) + ProblemFilters.exclude[DirectMissingMethodProblem]("cats.effect.IO#IOCont.this"))) .jvmSettings( javacOptions ++= Seq("-source", "1.8", "-target", "1.8") ) diff --git a/core/js/src/main/scala/cats/effect/IOApp.scala b/core/js/src/main/scala/cats/effect/IOApp.scala index 62e1b2a2f2..3878088ceb 100644 --- a/core/js/src/main/scala/cats/effect/IOApp.scala +++ b/core/js/src/main/scala/cats/effect/IOApp.scala @@ -20,13 +20,169 @@ import scala.concurrent.CancellationException import scala.concurrent.duration._ import scala.scalajs.js +/** + * The primary entry point to a Cats Effect application. Extend this + * trait rather than defining your own `main` method. This avoids the + * need to run [[IO.unsafeRunSync]] (or similar) on your own. + * + * `IOApp` takes care of the messy details of properly setting up + * (and tearing down) the [[unsafe.IORuntime]] needed to run the [[IO]] + * which represents your application. All of the associated thread + * pools (if relevant) will be configured with the assumption that + * your application is fully contained within the `IO` produced by + * the [[run]] method. Note that the exact details of how the runtime + * will be configured are very platform-specific. Part of the point + * of `IOApp` is to insulate users from the details of the underlying + * runtime (whether JVM or JavaScript). + * + * {{{ + * object MyApplication extends IOApp { + * def run(args: List[String]) = + * for { + * _ <- IO.print("Enter your name: ") + * name <- IO.readln + * _ <- IO.println("Hello, " + name) + * } yield ExitCode.Success + * } + * }}} + * + * In the above example, `MyApplication` will be a runnable class with + * a `main` method, visible to Sbt, IntelliJ, or plain-old `java`. When + * run externally, it will print, read, and print in the obvious way, + * producing a final process exit code of 0. Any exceptions thrown within + * the `IO` will be printed to standard error and the exit code will be + * set to 1. In the event that the main [[Fiber]] (represented by the `IO` + * returned by `run`) is canceled, the runtime will produce an exit code of 1. + * + * Note that exit codes are an implementation-specific feature of the + * underlying runtime, as are process arguments. Naturally, all JVMs + * support these functions, as does NodeJS, but some JavaScript execution + * environments will be unable to replicate these features (or they simply + * may not make sense). In such cases, exit codes may be ignored and/or + * argument lists may be empty. 
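+ *
+ * For illustration (a sketch only; the single-argument usage and the exit
+ * code `2` are invented for this example), an application which does consume
+ * its arguments and produce a custom [[ExitCode]] might look like this:
+ *
+ * {{{
+ * object ArgsExample extends IOApp {
+ *   def run(args: List[String]) =
+ *     args match {
+ *       case name :: Nil => IO.println("Hello, " + name).as(ExitCode.Success)
+ *       case _           => IO.println("usage: ArgsExample <name>").as(ExitCode(2))
+ *     }
+ * }
+ * }}}
+ *
+ * (The `MyApplication` example above, by contrast, does not need this
+ * flexibility.)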
+ * + * Note that in the case of the above example, we would actually be + * better off using [[IOApp.Simple]] rather than `IOApp` directly, since + * we are neither using `args` nor are we explicitly producing a custom + * [[ExitCode]]: + * + * {{{ + * object MyApplication extends IOApp.Simple { + * val run = + * for { + * _ <- IO.print("Enter your name: ") + * name <- IO.readln + * _ <- IO.println(s"Hello, " + name) + * } yield () + * } + * }}} + * + * It is valid to define `val run` rather than `def run` because `IO`'s + * evaluation is lazy: it will only run when the `main` method is + * invoked by the runtime. + * + * In the event that the process receives an interrupt signal (`SIGINT`) due + * to Ctrl-C (or any other mechanism), it will immediately `cancel` the main + * fiber. Assuming this fiber is not within an `uncancelable` region, this + * will result in interrupting any current activities and immediately invoking + * any finalizers (see: [[IO.onCancel]] and [[IO.bracket]]). The process will + * not shut down until the finalizers have completed. For example: + * + * {{{ + * object InterruptExample extends IOApp.Simple { + * val run = + * IO.bracket(startServer)( + * _ => IO.never)( + * server => IO.println("shutting down") *> server.close) + * } + * }}} + * + * If we assume the `startServer` function has type `IO[Server]` (or similar), + * this kind of pattern is very common. When this process receives a `SIGINT`, + * it will immediately print "shutting down" and run the `server.close` effect. + * + * One consequence of this design is it is possible to build applications which + * will ignore process interrupts. For example, if `server.close` runs forever, + * the process will ignore interrupts and will need to be cleaned up using + * `SIGKILL` (i.e. `kill -9`). This same phenomenon can be demonstrated by using + * [[IO.uncancelable]] to suppress all interruption within the application + * itself: + * + * {{{ + * object Zombie extends IOApp.Simple { + * val run = IO.never.uncancelable + * } + * }}} + * + * The above process will run forever and ignore all interrupts. The only way + * it will shut down is if it receives `SIGKILL`. + * + * It is possible (though not necessary) to override various platform-specific + * runtime configuration options, such as `computeWorkerThreadCount` (which only + * exists on the JVM). Please note that the default configurations have been + * extensively benchmarked and are optimal (or close to it) in most conventional + * scenarios. + * + * However, with that said, there really is no substitute to benchmarking your + * own application. Every application and scenario is unique, and you will + * always get the absolute best results by performing your own tuning rather + * than trusting someone else's defaults. `IOApp`'s defaults are very ''good'', + * but they are not perfect in all cases. One common example of this is + * applications which maintain network or file I/O worker threads which are + * under heavy load in steady-state operations. In such a performance profile, + * it is usually better to reduce the number of compute worker threads to + * "make room" for the I/O workers, such that they all sum to the number of + * physical threads exposed by the kernel. + * + * @see [[IO]] + * @see [[run]] + * @see [[ResourceApp]] + * @see [[IOApp.Simple]] + */ trait IOApp { private[this] var _runtime: unsafe.IORuntime = null + /** + * The runtime which will be used by `IOApp` to evaluate the + * [[IO]] produced by the `run` method. 
This may be overridden + * by `IOApp` implementations which have extremely specialized + * needs, but this is highly unlikely to ever be truly needed. + * As an example, if an application wishes to make use of an + * alternative compute thread pool (such as `Executors.fixedThreadPool`), + * it is almost always better to leverage [[IO.evalOn]] on the value + * produced by the `run` method, rather than directly overriding + * `runtime`. + * + * In other words, this method is made available to users, but its + * use is strongly discouraged in favor of other, more precise + * solutions to specific use-cases. + * + * This value is guaranteed to be equal to [[unsafe.IORuntime.global]]. + */ protected def runtime: unsafe.IORuntime = _runtime + + /** + * The configuration used to initialize the [[runtime]] which will + * evaluate the [[IO]] produced by `run`. It is very unlikely that + * users will need to override this method. + */ protected def runtimeConfig: unsafe.IORuntimeConfig = unsafe.IORuntimeConfig() + /** + * The entry point for your application. Will be called by the runtime + * when the process is started. If the underlying runtime supports it, + * any arguments passed to the process will be made available in the + * `args` parameter. The numeric value within the resulting [[ExitCode]] + * will be used as the exit code when the process terminates unless + * terminated exceptionally or by interrupt. + * + * @param args The arguments passed to the process, if supported by the + * underlying runtime. For example, `java com.company.MyApp --foo --bar baz` + * or `node com-mycompany-fastopt.js --foo --bar baz` would each + * result in `List("--foo", "--bar", "baz")`. + * @see [[IOApp.Simple!.run:cats\.effect\.IO[Unit]*]] + */ def run(args: List[String]): IO[ExitCode] final def main(args: Array[String]): Unit = { @@ -92,9 +248,15 @@ trait IOApp { object IOApp { + /** + * A simplified version of [[IOApp]] for applications which ignore their + * process arguments and always produces [[ExitCode.Success]] (unless + * terminated exceptionally or interrupted). + * + * @see [[IOApp]] + */ trait Simple extends IOApp { def run: IO[Unit] final def run(args: List[String]): IO[ExitCode] = run.as(ExitCode.Success) } - } diff --git a/core/jvm/src/main/scala/cats/effect/IOApp.scala b/core/jvm/src/main/scala/cats/effect/IOApp.scala index 5f6ca0976a..ab6d0b16de 100644 --- a/core/jvm/src/main/scala/cats/effect/IOApp.scala +++ b/core/jvm/src/main/scala/cats/effect/IOApp.scala @@ -20,16 +20,193 @@ import scala.concurrent.{blocking, CancellationException} import java.util.concurrent.CountDownLatch +/** + * The primary entry point to a Cats Effect application. Extend this + * trait rather than defining your own `main` method. This avoids the + * need to run [[IO.unsafeRunSync]] (or similar) on your own. + * + * `IOApp` takes care of the messy details of properly setting up + * (and tearing down) the [[unsafe.IORuntime]] needed to run the [[IO]] + * which represents your application. All of the associated thread + * pools (if relevant) will be configured with the assumption that + * your application is fully contained within the `IO` produced by + * the [[run]] method. Note that the exact details of how the runtime + * will be configured are very platform-specific. Part of the point + * of `IOApp` is to insulate users from the details of the underlying + * runtime (whether JVM or JavaScript). 
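+ *
+ * To make that concrete, here is a sketch of the sort of hand-rolled entry
+ * point `IOApp` replaces (illustrative only: the real machinery also deals
+ * with interruption, exit codes, and runtime teardown):
+ *
+ * {{{
+ * object ManualMain {
+ *   import cats.effect.unsafe.implicits.global
+ *
+ *   def main(args: Array[String]): Unit =
+ *     IO.println("Hello, World!").unsafeRunSync()
+ * }
+ * }}}
+ *
+ * With `IOApp`, you instead implement `run`: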
+ * + * {{{ + * object MyApplication extends IOApp { + * def run(args: List[String]) = + * for { + * _ <- IO.print("Enter your name: ") + * name <- IO.readln + * _ <- IO.println("Hello, " + name) + * } yield ExitCode.Success + * } + * }}} + * + * In the above example, `MyApplication` will be a runnable class with + * a `main` method, visible to Sbt, IntelliJ, or plain-old `java`. When + * run externally, it will print, read, and print in the obvious way, + * producing a final process exit code of 0. Any exceptions thrown within + * the `IO` will be printed to standard error and the exit code will be + * set to 1. In the event that the main [[Fiber]] (represented by the `IO` + * returned by `run`) is canceled, the runtime will produce an exit code of 1. + * + * Note that exit codes are an implementation-specific feature of the + * underlying runtime, as are process arguments. Naturally, all JVMs + * support these functions, as does NodeJS, but some JavaScript execution + * environments will be unable to replicate these features (or they simply + * may not make sense). In such cases, exit codes may be ignored and/or + * argument lists may be empty. + * + * Note that in the case of the above example, we would actually be + * better off using [[IOApp.Simple]] rather than `IOApp` directly, since + * we are neither using `args` nor are we explicitly producing a custom + * [[ExitCode]]: + * + * {{{ + * object MyApplication extends IOApp.Simple { + * val run = + * for { + * _ <- IO.print("Enter your name: ") + * name <- IO.readln + * _ <- IO.println(s"Hello, " + name) + * } yield () + * } + * }}} + * + * It is valid to define `val run` rather than `def run` because `IO`'s + * evaluation is lazy: it will only run when the `main` method is + * invoked by the runtime. + * + * In the event that the process receives an interrupt signal (`SIGINT`) due + * to Ctrl-C (or any other mechanism), it will immediately `cancel` the main + * fiber. Assuming this fiber is not within an `uncancelable` region, this + * will result in interrupting any current activities and immediately invoking + * any finalizers (see: [[IO.onCancel]] and [[IO.bracket]]). The process will + * not shut down until the finalizers have completed. For example: + * + * {{{ + * object InterruptExample extends IOApp.Simple { + * val run = + * IO.bracket(startServer)( + * _ => IO.never)( + * server => IO.println("shutting down") *> server.close) + * } + * }}} + * + * If we assume the `startServer` function has type `IO[Server]` (or similar), + * this kind of pattern is very common. When this process receives a `SIGINT`, + * it will immediately print "shutting down" and run the `server.close` effect. + * + * One consequence of this design is it is possible to build applications which + * will ignore process interrupts. For example, if `server.close` runs forever, + * the process will ignore interrupts and will need to be cleaned up using + * `SIGKILL` (i.e. `kill -9`). This same phenomenon can be demonstrated by using + * [[IO.uncancelable]] to suppress all interruption within the application + * itself: + * + * {{{ + * object Zombie extends IOApp.Simple { + * val run = IO.never.uncancelable + * } + * }}} + * + * The above process will run forever and ignore all interrupts. The only way + * it will shut down is if it receives `SIGKILL`. + * + * It is possible (though not necessary) to override various platform-specific + * runtime configuration options, such as `computeWorkerThreadCount` (which only + * exists on the JVM). 
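+ *
+ * As a sketch (the numbers are purely illustrative), such an override might
+ * reserve a couple of threads for a separate I/O pool:
+ *
+ * {{{
+ * object TunedApplication extends IOApp.Simple {
+ *   override protected def computeWorkerThreadCount: Int =
+ *     Math.max(2, super.computeWorkerThreadCount - 2)
+ *
+ *   val run = IO.println("running with a smaller compute pool")
+ * }
+ * }}}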
Please note that the default configurations have been + * extensively benchmarked and are optimal (or close to it) in most conventional + * scenarios. + * + * However, with that said, there really is no substitute to benchmarking your + * own application. Every application and scenario is unique, and you will + * always get the absolute best results by performing your own tuning rather + * than trusting someone else's defaults. `IOApp`'s defaults are very ''good'', + * but they are not perfect in all cases. One common example of this is + * applications which maintain network or file I/O worker threads which are + * under heavy load in steady-state operations. In such a performance profile, + * it is usually better to reduce the number of compute worker threads to + * "make room" for the I/O workers, such that they all sum to the number of + * physical threads exposed by the kernel. + * + * @see [[IO]] + * @see [[run]] + * @see [[ResourceApp]] + * @see [[IOApp.Simple]] + */ trait IOApp { private[this] var _runtime: unsafe.IORuntime = null + + /** + * The runtime which will be used by `IOApp` to evaluate the + * [[IO]] produced by the `run` method. This may be overridden + * by `IOApp` implementations which have extremely specialized + * needs, but this is highly unlikely to ever be truly needed. + * As an example, if an application wishes to make use of an + * alternative compute thread pool (such as `Executors.fixedThreadPool`), + * it is almost always better to leverage [[IO.evalOn]] on the value + * produced by the `run` method, rather than directly overriding + * `runtime`. + * + * In other words, this method is made available to users, but its + * use is strongly discouraged in favor of other, more precise + * solutions to specific use-cases. + * + * This value is guaranteed to be equal to [[unsafe.IORuntime.global]]. + */ protected def runtime: unsafe.IORuntime = _runtime + /** + * The configuration used to initialize the [[runtime]] which will + * evaluate the [[IO]] produced by `run`. It is very unlikely that + * users will need to override this method. + */ protected def runtimeConfig: unsafe.IORuntimeConfig = unsafe.IORuntimeConfig() + /** + * Controls the number of worker threads which will be allocated to + * the compute pool in the underlying runtime. In general, this should be + * no ''greater'' than the number of physical threads made available by + * the underlying kernel (which can be determined using + * `Runtime.getRuntime().availableProcessors()`). For any application + * which has significant additional non-compute thread utilization (such + * as asynchronous I/O worker threads), it may be optimal to reduce the + * number of compute threads by the corresponding amount such that the + * total number of active threads exactly matches the number of underlying + * physical threads. + * + * In practice, tuning this parameter is unlikely to affect your application + * performance beyond a few percentage points, and the default value is + * optimal (or close to optimal) in ''most'' common scenarios. + * + * '''This setting is JVM-specific and will not compile on JavaScript.''' + * + * For more details on Cats Effect's runtime threading model please see + * [[https://typelevel.org/cats-effect/docs/thread-model]]. + */ protected def computeWorkerThreadCount: Int = Math.max(2, Runtime.getRuntime().availableProcessors()) + /** + * The entry point for your application. Will be called by the runtime + * when the process is started. 
If the underlying runtime supports it, + * any arguments passed to the process will be made available in the + * `args` parameter. The numeric value within the resulting [[ExitCode]] + * will be used as the exit code when the process terminates unless + * terminated exceptionally or by interrupt. + * + * @param args The arguments passed to the process, if supported by the + * underlying runtime. For example, `java com.company.MyApp --foo --bar baz` + * or `node com-mycompany-fastopt.js --foo --bar baz` would each + * result in `List("--foo", "--bar", "baz")`. + * @see [[IOApp.Simple!.run:cats\.effect\.IO[Unit]*]] + */ def run(args: List[String]): IO[ExitCode] final def main(args: Array[String]): Unit = { @@ -139,14 +316,19 @@ trait IOApp { Thread.currentThread().interrupt() } } - } object IOApp { + /** + * A simplified version of [[IOApp]] for applications which ignore their + * process arguments and always produces [[ExitCode.Success]] (unless + * terminated exceptionally or interrupted). + * + * @see [[IOApp]] + */ trait Simple extends IOApp { def run: IO[Unit] final def run(args: List[String]): IO[ExitCode] = run.as(ExitCode.Success) } - } diff --git a/core/shared/src/main/scala/cats/effect/IO.scala b/core/shared/src/main/scala/cats/effect/IO.scala index 1ab5a89d08..120ccc8de0 100644 --- a/core/shared/src/main/scala/cats/effect/IO.scala +++ b/core/shared/src/main/scala/cats/effect/IO.scala @@ -103,6 +103,8 @@ import scala.util.{Failure, Success, Try} * IO.pure(a) * } * }}} + * + * @see [[IOApp]] for the preferred way of executing whole programs wrapped in `IO` */ sealed abstract class IO[+A] private () extends IOPlatform[A] { @@ -118,12 +120,23 @@ sealed abstract class IO[+A] private () extends IOPlatform[A] { /** * Runs the current IO, then runs the parameter, keeping its result. - * The result of the first action is ignored. - * If the source fails, the other action won't run. + * The result of the first action is ignored. If the source fails, + * the other action won't run. Not suitable for use when the parameter + * is a recursive reference to the current expression. + * + * @see [[>>]] for the recursion-safe, lazily evaluated alternative */ def *>[B](that: IO[B]): IO[B] = productR(that) + /** + * Runs the current IO, then runs the parameter, keeping its result. + * The result of the first action is ignored. + * If the source fails, the other action won't run. Evaluation of the + * parameter is done lazily, making this suitable for recursion. + * + * @see [*>] for the strictly evaluated alternative + */ def >>[B](that: => IO[B]): IO[B] = flatMap(_ => that) @@ -170,6 +183,16 @@ sealed abstract class IO[+A] private () extends IOPlatform[A] { def option: IO[Option[A]] = redeem(_ => None, Some(_)) + /** + * Runs the current and given IO in parallel, producing the pair of + * the outcomes. Both outcomes are produced, regardless of whether + * they complete successfully. + * + * @see [[both]] for the version which embeds the outcomes to produce a pair + * of the results + * @see [[raceOutcome]] for the version which produces the outcome of the + * winner and cancels the loser of the race + */ def bothOutcome[B](that: IO[B]): IO[(OutcomeIO[A @uncheckedVariance], OutcomeIO[B])] = IO.uncancelable { poll => racePair(that).flatMap { @@ -178,6 +201,16 @@ sealed abstract class IO[+A] private () extends IOPlatform[A] { } } + /** + * Runs the current and given IO in parallel, producing the pair of + * the results. 
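+ *
+ * For example (a sketch; `fetchUser` and `fetchOrders` stand in for any two
+ * independent effects, and `User`/`Order` are hypothetical types):
+ *
+ * {{{
+ * val user: IO[User] = fetchUser(id)
+ * val orders: IO[List[Order]] = fetchOrders(id)
+ *
+ * val combined: IO[(User, List[Order])] = user.both(orders)
+ * }}}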
If either fails with an error, the result of the whole + * will be that error and the other will be canceled. + * + * @see [[bothOutcome]] for the version which produces the outcome of both + * effects executed in parallel + * @see [[race]] for the version which produces the result of the winner and + * cancels the loser of the race + */ def both[B](that: IO[B]): IO[(A, B)] = IO.both(this, that) @@ -321,6 +354,16 @@ sealed abstract class IO[+A] private () extends IOPlatform[A] { def bracketCase[B](use: A => IO[B])(release: (A, OutcomeIO[B]) => IO[Unit]): IO[B] = IO.bracketFull(_ => this)(use)(release) + /** + * Shifts the execution of the current IO to the specified `ExecutionContext`. + * All stages of the execution will default to the pool in question, and any + * asynchronous callbacks will shift back to the pool upon completion. Any nested + * use of `evalOn` will override the specified pool. Once the execution fully + * completes, default control will be shifted back to the enclosing (inherited) pool. + * + * @see [[IO.executionContext]] for obtaining the `ExecutionContext` on which + * the current `IO` is being executed + */ def evalOn(ec: ExecutionContext): IO[A] = IO.EvalOn(this, ec) def startOn(ec: ExecutionContext): IO[FiberIO[A @uncheckedVariance]] = start.evalOn(ec) @@ -613,7 +656,10 @@ sealed abstract class IO[+A] private () extends IOPlatform[A] { /** * Makes the source `IO` uninterruptible such that a [[cats.effect.kernel.Fiber#cancel]] - * signal has no effect. + * signal is ignored until completion. + * + * @see [[IO.uncancelable]] for constructing uncancelable `IO` values with + * user-configurable cancelable regions */ def uncancelable: IO[A] = IO.uncancelable(_ => this) @@ -813,6 +859,14 @@ sealed abstract class IO[+A] private () extends IOPlatform[A] { interpret(this) } + /** + * Evaluates the current `IO` in an infinite loop, terminating only on + * error or cancelation. + * + * {{{ + * IO.println("Hello, World!").foreverM // continues printing forever + * }}} + */ def foreverM: IO[Nothing] = Monad[IO].foreverM[A, Nothing](this) def whileM[G[_]: Alternative, B >: A](p: IO[Boolean]): IO[G[B]] = @@ -975,9 +1029,16 @@ object IO extends IOCompanionPlatform with IOLowPriorityImplicits { /** * An IO that contains an empty Option. + * + * @see [[some]] for the non-empty Option variant */ def none[A]: IO[Option[A]] = pure(None) + /** + * An IO that contains some Option of the given value. + * + * @see [[none]] for the empty Option variant + */ def some[A](a: A): IO[Option[A]] = pure(Some(a)) /** @@ -1108,8 +1169,8 @@ object IO extends IOCompanionPlatform with IOLowPriorityImplicits { * finish, either in success or error. The loser of the race is * canceled. * - * The two tasks are executed in parallel if asynchronous, - * the winner being the first that signals a result. + * The two tasks are executed in parallel, the winner being the + * first that signals a result. * * As an example see [[IO.timeout]] and [[IO.timeoutTo]] * diff --git a/core/shared/src/main/scala/cats/effect/ResourceApp.scala b/core/shared/src/main/scala/cats/effect/ResourceApp.scala index c3892b1986..6976c04edc 100644 --- a/core/shared/src/main/scala/cats/effect/ResourceApp.scala +++ b/core/shared/src/main/scala/cats/effect/ResourceApp.scala @@ -18,7 +18,60 @@ package cats.effect import cats.syntax.all._ +/** + * A convenience trait for defining applications which are entirely within + * [[Resource]]. 
This is implemented as a relatively straightforward wrapper + * around [[IOApp]] and thus inherits most of its functionality and semantics. + * + * This trait should generally be used for any application which would otherwise + * trivially end with [[cats.effect.kernel.Resource!.use]] (or one of its + * variants). For example: + * + * {{{ + * object HttpExample extends IOApp { + * def run(args: List[String]) = { + * val program = for { + * config <- Resource.eval(loadConfig(args.head)) + * postgres <- Postgres[IO](config.jdbcUri) + * endpoints <- ExampleEndpoints[IO](config, postgres) + * _ <- HttpServer[IO](config.host, config.port, endpoints) + * } yield () + * + * program.useForever.as(ExitCode.Success) + * } + * } + * }}} + * + * This example assumes some underlying libraries like [[https://tpolecat.github.io/skunk/ Skunk]] + * and [[https://http4s.org Http4s]], but otherwise it represents a relatively + * typical example of what the main class for a realistic Cats Effect application + * might look like. Notably, the whole thing is enclosed in `Resource`, which is + * `use`d at the very end. This kind of pattern is so common that `ResourceApp` + * defines a special trait which represents it. We can rewrite the above example: + * + * {{{ + * object HttpExample extends ResourceApp.Forever { + * def run(args: List[String]) = + * for { + * config <- Resource.eval(loadConfig(args.head)) + * db <- Postgres[IO](config.jdbcUri) + * endpoints <- ExampleEndpoints[IO](config, db) + * _ <- HttpServer[IO](config.host, config.port, endpoints) + * } yield () + * } + * }}} + * + * These two programs are equivalent. + * + * @see [[run]] + * @see [[ResourceApp.Simple]] + * @see [[ResourceApp.Forever]] + */ trait ResourceApp { self => + + /** + * @see [[IOApp.run]] + */ def run(args: List[String]): Resource[IO, ExitCode] final def main(args: Array[String]): Unit = { @@ -32,12 +85,40 @@ trait ResourceApp { self => } object ResourceApp { + + /** + * A [[ResourceApp]] which takes no process arguments and always produces + * [[ExitCode.Success]] except when an exception is raised. + * + * @see [[IOApp.Simple]] + */ trait Simple extends ResourceApp { + + /** + * @see [[cats.effect.IOApp.Simple!.run:cats\.effect\.IO[Unit]*]] + */ def run: Resource[IO, Unit] + final def run(args: List[String]): Resource[IO, ExitCode] = run.as(ExitCode.Success) } + /** + * A [[ResourceApp]] which runs until externally interrupted (with `SIGINT`), + * at which point all finalizers will be run and the application will shut + * down upon completion. This is an extremely common pattern in practical + * Cats Effect applications and is particularly applicable to network servers. + * + * @see [[cats.effect.kernel.Resource!.useForever]] + */ trait Forever { self => + + /** + * Identical to [[ResourceApp.run]] except that it delegates to + * [[cats.effect.kernel.Resource!.useForever]] instead of + * [[cats.effect.kernel.Resource!.use]]. + * + * @see [[ResourceApp.run]] + */ def run(args: List[String]): Resource[IO, Unit] final def main(args: Array[String]): Unit = {