
[mkgmap-dev] This file has too many entities in a block.

From Gerd Petermann gpetermann_muenchen at hotmail.com on Fri Oct 19 18:35:04 BST 2018

Hi Vojta,

it seems that the pbf file is corrupt; the error message is produced by the library that splitter uses.
I'd try to create the file again, maybe it was an I/O error. If that doesn't help,
you could check whether osmconvert is able to handle the file, but I would also contact the author of the program that created the pbf.
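For example, a round trip like this (the output file name is only illustrative, assuming osmconvert is installed) forces osmconvert to re-read and rewrite every block, so a damaged block should show up as an error:

osmconvert SRTM_NP.pbf -o=SRTM_NP_rewritten.pbf

If that completes cleanly, the rewritten file may also be worth feeding to splitter again.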


From: mkgmap-dev <mkgmap-dev-bounces at lists.mkgmap.org.uk> on behalf of v0174 <v0174 at v0174.net>
Sent: Friday, 19 October 2018 19:03
To: mkgmap-dev at lists.mkgmap.org.uk
Subject: [mkgmap-dev] This file has too many entities in a block.


I've been trying to solve this problem for some time, but I can't find a solution, so I hope this is the right place to ask.

I'm trying to split SRTM contours for Nepal with 10 m contour spacing. It is not that much data; the .pbf file is only 375 MB.

However, when I try to run it:

java -jar -Xms4G -Xmx4G ..\mkgmap\Splitter\splitter.jar --output-dir=./SRTM/ --output=pbf --max-threads=6 --status-freq=60 SRTM_NP.pbf

it first starts to give warnings:

Warning: Fileblock has body size too large and may be considered corrupt

then errors:

Exception in thread "worker-3" java.lang.Error: This file has too many entities in a block. Parsers will reject it.
        at crosby.binary.file.FileBlock.newInstance(Unknown Source)
        at crosby.binary.BinarySerializer.processBatch(Unknown Source)
        at uk.me.parabola.splitter.writer.BinaryMapWriter$PBFSerializer$Processor.checkLimit(BinaryMapWriter.java:385)
        at uk.me.parabola.splitter.writer.BinaryMapWriter$PBFSerializer$Processor.process(BinaryMapWriter.java:406)
        at uk.me.parabola.splitter.writer.BinaryMapWriter.write(BinaryMapWriter.java:515)
        at uk.me.parabola.splitter.writer.AbstractOSMWriter.write(AbstractOSMWriter.java:83)
        at uk.me.parabola.splitter.SplitProcessor$OSMWriterWorker.run(SplitProcessor.java:430)
        at java.lang.Thread.run(Unknown Source)

and the resulting files are incomplete.

I tried lowering the --max-nodes parameter, but that just prolongs the process; I ended up with days of processing time.
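For illustration, such a run looked roughly like this (the value 800000 is only an example of a lowered limit, not the exact number I used):

java -jar -Xms4G -Xmx4G ..\mkgmap\Splitter\splitter.jar --output-dir=./SRTM/ --output=pbf --max-threads=6 --max-nodes=800000 SRTM_NP.pbf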

Do you have any idea what to tune to avoid this? I have split files of similar size before, but Nepal seems to be a tough nut to crack. I'm using Oracle Java 1.8.0_191 64-bit (but I had the same problems with 32-bit Java too).

Thanks in advance,


