The German Statistisches Bundesamt has announced an ad-hoc evaluation of mortality figures for 2020/2021 which shows that the impact of COVID-19 was huge. I’d like to challenge that point here.
Let me start by presenting the data used by the Statistisches Bundesamt as a table, aggregated by year:
Let me draw it as three independent graphs for the age groups 0 to 50, 50 to 70, and 70+.
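The aggregation behind those three graphs can be sketched in a few lines of Python. The record layout here (year, age, deaths) and the sample numbers are my own placeholders, not the real Destatis export, which is shaped differently; this only illustrates grouping by year and age band:

```python
from collections import defaultdict

# Hypothetical records: (year, age, deaths). The real Destatis export
# has a different layout; these values only illustrate the aggregation.
ROWS = [
    (2019, 45, 100), (2019, 60, 300), (2019, 80, 900),
    (2020, 45, 110), (2020, 60, 320), (2020, 80, 980),
]

def age_band(age):
    """Map an age to one of the three bands used in the graphs."""
    if age < 50:
        return "0-50"
    if age < 70:
        return "50-70"
    return "70+"

def aggregate(rows):
    """Sum deaths per (year, age band)."""
    totals = defaultdict(int)
    for year, age, deaths in rows:
        totals[(year, age_band(age))] += deaths
    return dict(totals)

print(aggregate(ROWS))
```

Each (year, band) bucket then becomes one point on the corresponding graph.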
It is quite simple, after a couple of days of fixing.
I assume that you’re using a working MacPorts installation :)
First, you need something we call a bootstrap Java. AdoptOpenJDK 1.8 isn’t ported, and MacPorts complains about it, but we need it.
So, let’s install it and SBT.
catap@Kirills-mini-m1 ~ % sudo port install openjdk8 sbt
---> Installing openjdk8 @8u275_0
---> Activating openjdk8 @8u275_0
---> Cleaning openjdk8
---> Computing dependencies for sbt
---> Fetching archive for sbt
---> Attempting to fetch sbt-0.13.18_0.darwin_20.noarch.tbz2 from https://packages.macports.org/sbt
---> Attempting to fetch sbt-0.13.18_0.darwin_20.noarch.tbz2 from https://lil.fr.packages.macports.org/sbt
---> Attempting to fetch sbt-0.13.18_0.darwin_20.noarch.tbz2 from https://nue.de.packages.macports.org/sbt
…
Do you know that Scala can’t work on Java since version 15?
The cause is openjdk/jdk@113c48f, which introduced an isEmpty method on java.lang.CharSequence in Java 15.
My code can’t be compiled on Java 15+ because isEmpty can now be resolved both from scala.collection.ArrayOps[Char] and from ArrayCharSequence, which inherits java.lang.CharSequence with its brand new default method.
Can it be solved? Only with a hack in the Scala compiler.
The good news is that anything that was built before can still be run on Java 15.
Scala already uses a size method inside ArrayOps to step around a similar issue with length, which is implemented by Java’s arrays, for example, and I have no idea how it is possible to solve this one without renaming the method on the Scala side.
Anyway, you can enjoy the discussion.
Let’s play with 2019-nCoV data from WHO.
We have a bit of data from WHO here: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports/
From it we can extract the number of cases:
44 cases on the 3rd day of the year
282 cases on the 20th day of the year
314 cases on the 21st day of the year
581 cases on the 22nd day of the year
846 cases on the 24th day of the year
1320 cases on the 25th day of the year
2014 on the 26th day [was predicted as 1843, error 9.2%]
2798 on the 27th day [was predicted as 2950, error 5.1%]
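The post doesn’t say how those predictions were made; one simple possibility is a log-linear least-squares fit, i.e. assuming exponential growth. This sketch fits ln(cases) against the day of year over all points, which is my own assumption and clearly not the exact method used (a whole-range fit underestimates the later, faster growth):

```python
import math

# (day of year, confirmed cases) from the WHO situation reports above
DATA = [(3, 44), (20, 282), (21, 314), (22, 581), (24, 846), (25, 1320)]

def fit_exponential(points):
    """Least-squares fit of ln(cases) = a + b * day; returns (a, b)."""
    xs = [d for d, _ in points]
    ys = [math.log(c) for _, c in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def predict(points, day):
    """Extrapolate the fitted exponential to a given day."""
    a, b = fit_exponential(points)
    return math.exp(a + b * day)

print(round(predict(DATA, 26)))  # naive whole-range extrapolation for day 26
```

Fitting only the most recent days would track the accelerating outbreak much more closely, which is probably closer to what the predictions above did.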
As you may know, I’m in love with Shenandoah. Here I’d like to share some results from running Elasticsearch under Shenandoah.
This experiment was made on an unnamed cluster with dozens of machines, and I’d like to share only two pictures: one from a node with Shenandoah enabled for the last week, and a second one running G1 with settings close to those I described before. Both machines have a very similar number of segments, and this cluster is used almost exclusively for writing. Let’s say 99% of operations are writes.
Summary: at peak load it is much better, and at night it…
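For reference, switching Elasticsearch’s JVM to Shenandoah is done in config/jvm.options by replacing the default collector flags; on JDK builds before 15 the experimental unlock flag is required (the tuning flags from my earlier post are not repeated here):

```
# config/jvm.options: instead of the default -XX:+UseG1GC / CMS lines
-XX:+UnlockExperimentalVMOptions
-XX:+UseShenandoahGC
```

Note that Shenandoah is only available in JDK builds that include it (mainline JDK 12+, plus backports in some 8 and 11 builds).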
Sometimes, when you run Elasticsearch, you may hit an issue where a shard gets broken.
There may be different root causes, for example:
Anyway, it may happen. And it will happen.
The good news is that Elasticsearch is very tolerant of this sort of issue and can fix itself. But it can’t if you haven’t got a replica of the shard.
You increase the probability of this by setting index.translog.durability to async, but in some cases it may also increase write speed. …
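This is the real index setting being discussed; the index name below is a placeholder:

```
PUT /my-index/_settings
{
  "index": {
    "translog": {
      "durability": "async"
    }
  }
}
```

With async, the translog is fsynced in the background rather than on every request, so writes that were already acknowledged can be lost if the node crashes; that is exactly the trade-off between speed and the risk of an unrecoverable shard.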
It looks like Google has started to operate DNS-over-TLS on their publicly available DNS servers.
So we have at least three different providers offering public DNS-over-TLS:
You can easily add all of these servers on your laptop with knot-resolver.
The first step is to install it. I’m using macOS, so I ran brew install knot-resolver
The next step is to get the root certificates of the DNS servers and put them next to the kresd config.
How to determine which one is required? You can do it with openssl s_client -showcerts -connect 220.127.116.11:853 …
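A sketch of what the kresd forwarding config might look like, using knot-resolver’s policy.TLS_FORWARD. This variant uses hostname-based verification against the system CA store; the post’s approach of fetching certificates via openssl would instead pin them with pin_sha256. The IPs and hostnames below are the usual public ones and are my assumption about which providers are meant:

```
-- kresd.conf (path differs per platform)
policy.add(policy.all(policy.TLS_FORWARD({
  {'8.8.8.8@853', hostname='dns.google'},
  {'1.1.1.1@853', hostname='cloudflare-dns.com'},
  {'9.9.9.9@853', hostname='dns.quad9.net'},
})))
```

After restarting kresd, all queries are forwarded over TLS to these servers.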
Well… sometimes you would like to debug something inside your container.
Let’s imagine you have a bug that creates a memory leak in your application after many hours of running… let’s say a week.
When you try to get a memory dump, you realize that your Docker container hasn’t got enough space: it has the default 10GB when you need at least 50GB.
You have three options:
Elasticsearch has a very interesting idea, percolate queries: you store some queries, and you can then ask Elasticsearch to match a document against all of them, getting back only the queries that matched.
It has a performance issue. Elastic made some optimizations, like adding a pre-filter to quickly discard queries that won’t match. You can read about this on the official blog in two parts: part 1 and part 2.
Let’s imagine that you have a few thousand queries and you would like to match a document of a few kilobytes against them, to find out which ones match.
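The percolator flow itself is easy to sketch: a mapping with a field of the real type "percolator", queries indexed as ordinary documents, and a search with a percolate clause. The index and field names below are my own placeholders; this only builds the request bodies rather than talking to a cluster:

```python
import json

# 1. A mapping with a "percolator" field, so queries can be stored as documents.
MAPPING = {
    "mappings": {
        "properties": {
            "query": {"type": "percolator"},
            "body": {"type": "text"},
        }
    }
}

# 2. A stored query, indexed into that index like a normal document.
STORED_QUERY = {"query": {"match": {"body": "memory leak"}}}

# 3. A percolate search: matches an in-flight document against stored queries.
def percolate_request(doc):
    """Build the search body that returns only the stored queries matching doc."""
    return {"query": {"percolate": {"field": "query", "document": doc}}}

req = percolate_request({"body": "how to debug a memory leak in docker"})
print(json.dumps(req, indent=2))
```

The response of such a search contains one hit per stored query that matched, which is exactly the “few thousand queries against one small document” workload described above.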
If you work with Elasticsearch, I expect that you have read the official advice about heap size. If you haven’t, let me summarise it: