I was trying to understand what happens if I use Dispatchers.IO instead of Dispatchers.Default, using this code:
runBlocking {
    val list = mutableListOf<Job>()
    val time1 = System.currentTimeMillis()
    for (i in 1..100) {
        val job = lifecycleScope.launch(Dispatchers.IO) {
            val c = "c$i"
            Log.i("CoroutineRunnerLog", "Launched a coroutine $c on Thread : ${Thread.currentThread().name}")
            for (j in 1..10) {
                Log.d("CoroutineRunnerLog", "Coroutine $c, j = $j, Thread : ${Thread.currentThread().name}")
            }
        }
        list.add(job)
    }
    list.joinAll()
    val time2 = System.currentTimeMillis()
    Log.e("CoroutineRunnerLog", "difference is = ${time2 - time1}")
}
Since Dispatchers.IO can expand up to 64 threads, and Dispatchers.Default has at most as many threads as there are CPU cores (8 in my case), I expected Dispatchers.IO to perform better (though I am not sure about that).
I just want to understand why Dispatchers.Default runs faster than Dispatchers.IO despite having fewer threads in its thread pool.
Output:
Dispatchers.IO = 200 - 300 ms
Dispatchers.Default = 80 - 100 ms
- Each job is not really doing much. I expect more time is taken up creating threads than running them. Also, please read stackoverflow/questions/504103/… – k314159 Commented Jan 30 at 12:55
3 Answers
Running CPU-bound jobs on more threads than the number of CPU cores results in excessive context switching and thread management overhead, increasing the total execution time.
Since your jobs only compute without waiting, using Dispatchers.Default reduces the total execution time.
However, if you introduce waiting (e.g., Thread.sleep(500)) in your jobs, Dispatchers.IO will complete the total work faster: while some threads are blocked waiting, its larger pool still has threads available for the remaining jobs.
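A rough sketch of that scenario (not the original poster's code; the 500 ms sleep stands in for any blocking call, and the numbers assume the default pool sizes of roughly 8 Default threads on an 8-core device and 64 IO threads):
import kotlinx.coroutines.*

fun main() = runBlocking {
    val jobs = List(100) {
        launch(Dispatchers.IO) {   // swap in Dispatchers.Default to compare
            Thread.sleep(500L)     // stands in for a blocking call (network, disk, ...)
        }
    }
    jobs.joinAll()
}
With 100 such jobs, Dispatchers.IO finishes in roughly 2 "waves" of 500 ms, while Dispatchers.Default needs about 13, so here the larger pool clearly wins.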
Expected behaviour should be Dispatcher.IO should have better performance
This is fundamentally incorrect. If you have 8 cores running CPU-bound tasks, you should expect worse performance when using more than 8 threads simultaneously. Every extra thread entails context switching that would not occur with 8 or fewer threads running on the CPU.
Dispatchers.IO can expand up to 64 threads
Dispatchers.IO has a lot of threads because those threads are meant for blocking calls (e.g. network requests, reading from disk).
Having many threads is only an advantage if each of them actually spends just a short time running on the CPU.
To sum up, a CPU running at 100% gains nothing from having more threads than cores; quite literally the opposite.
On a final note, I'd like to stress that it is imperative that you don't block threads on the Dispatchers.Default dispatcher. Since that dispatcher has a limited number of threads, any blocked thread will very much hurt the performance of your code.
If you plan to call a blocking operation, use withContext like this:
withContext(Dispatchers.IO) {
    // blocking code goes here
}
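A minimal sketch of how that can look in practice (readConfig and its path parameter are made up for illustration; the point is only that the blocking call runs on the IO pool):
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext
import java.io.File

// Hypothetical helper: wraps a blocking file read so a caller running on
// Dispatchers.Default (or Main) never blocks its own thread.
suspend fun readConfig(path: String): String =
    withContext(Dispatchers.IO) {
        File(path).readText()   // blocking disk I/O, fine to park on the IO pool
    }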
I would say it is because of the context-switching overhead in Dispatchers.IO:
- Dispatchers.IO can create more threads than CPU cores, meaning some threads will be suspended and resumed frequently.
- Thread context switching is expensive: if a thread is paused and resumed frequently, it adds overhead.
- Dispatchers.Default, on the other hand, sticks to the number of CPU cores and avoids unnecessary thread swapping.
In your case:
- Default has fewer threads = less context switching = faster execution.
- IO has more threads = more context switching = slower execution.
What Should You Use?
- Since your workload is CPU-bound (loops, logging, minor operations), Dispatchers.Default is the better choice.
- If you were making network requests, database queries, or file I/O, then Dispatchers.IO would be better.
If your workload doesn't heavily rely on I/O blocking, Dispatchers.Default will usually be faster due to fewer threads and less context switching overhead.
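If you want to measure this yourself, here is a small self-contained sketch (cpuWork() is a made-up stand-in for the loops and logging in the question; absolute numbers will vary by device):
import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

// Made-up CPU-bound work standing in for the question's loops and logging.
fun cpuWork(): Long {
    var acc = 0L
    repeat(10_000) { acc += it.toLong() * it }
    return acc
}

fun main() = runBlocking {
    for (dispatcher in listOf(Dispatchers.Default, Dispatchers.IO)) {
        val elapsed = measureTimeMillis {
            coroutineScope {
                repeat(100) { launch(dispatcher) { cpuWork() } }
            }   // coroutineScope returns only after all 100 coroutines finish
        }
        println("$dispatcher took $elapsed ms")
    }
}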