Aion BetaNet MegaThread v0.1.14

Following from @Jason’s v0.1.13 MegaThread, I am going to go ahead and start the v0.1.14 MegaThread.

Version: Aion v0.1.14 (aion-v0.1.14.bdd2b40-2018-03-05)

March 5, 2018


Aion community: please update your kernel to this release for the period of March 5-11.

  • fix various p2p/syncing issues
  • implement upnp
  • update status dump
  • fix nonce manager and LRUMap issues
  • optimize EquiValidator
  • modularize fastvm

Github issues:

Noted issues:

  • ERROR GEN [p2p-write]: addPendingTransactionImpl tx is rejected due to: INVALID_NONCE (a known bug that shows up in node logs during mass transaction testing; it will be patched in an upcoming release)

Syncing seems to be much faster with this version.


I just synced two machines from scratch and they ended up on different blocks.

As of this comment:

  • machine #1 is on block 298406
  • machine #2 is on block 298555

Edit: After some more time, it appears as though they are both on the same block now.

As of the time of this edit: block 298707


There is a note on the release with an excellent walkthrough on how to recover from a sidechain.

Yes, that is a great explanation on how to determine if you’re on a sidechain and how to correct it.

My concern, though, is that this should really never happen. As in, never.

The heaviest chain, defined as the one with the “highest total difficulty”, should always be the dominant and followed chain.
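
For anyone following along, “total difficulty” is just the sum of every block’s difficulty from genesis to the tip, and heaviest-chain fork choice picks the candidate with the largest sum. Here is a toy sketch of that rule (my own illustration, not Aion’s actual consensus code; names are hypothetical):

```java
import java.util.Arrays;
import java.util.List;

// Toy heaviest-chain fork choice: pick the chain whose cumulative
// difficulty is highest. Illustrative only, not the Aion kernel's code.
public class ForkChoice {

    // Total difficulty = sum of per-block difficulties, genesis to tip.
    static long totalDifficulty(long[] blockDifficulties) {
        return Arrays.stream(blockDifficulties).sum();
    }

    // Index of the heaviest chain among the candidates.
    static int heaviestChain(List<long[]> chains) {
        int best = 0;
        for (int i = 1; i < chains.size(); i++) {
            if (totalDifficulty(chains.get(i)) > totalDifficulty(chains.get(best))) {
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        long[] mainChain = {100, 120, 140};   // 3 blocks, total difficulty 360
        long[] sideChain = {100, 90, 80, 70}; // 4 blocks, total difficulty 340
        System.out.println(heaviestChain(Arrays.asList(mainChain, sideChain))); // prints 0
    }
}
```

Note the point of the example: a shorter chain can still be heavier than a longer one if its blocks were mined at higher difficulty, which is exactly why block height alone isn’t a safe tiebreaker.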

Couldn’t agree more.


Good evening
Just did the install, and it is getting easier as you run through the process. I don’t have a lot of horsepower on my VM, but it is chugging away. The only error that I’m seeing intermittently in the early stages is this:
18-03-05 17:09:18.758 INFO SYNC [sync-import]: <import-best num=49536 hash=7637c4 txs=0>
18-03-05 17:09:20.994 ERROR SYNC [p2p-write]: <res-bodies invalid>
18-03-05 17:09:21.795 INFO SYNC [sync-import]: <import-best num=49537 hash=613daf txs=0>
The pattern is consistent: it starts with one, then it runs for a few minutes and emits another couple of messages:
18-03-05 17:05:00.775 ERROR SYNC [p2p-write]: <res-bodies invalid>
18-03-05 17:05:01.052 ERROR SYNC [p2p-write]: <res-bodies invalid>

I didn’t paste these in sequence, but it is consistent. Going to just let it run and see what happens.
Seems faster than the previous version.
Jim M


I saw those errors in my sync as well.
And yeah syncing the full chain is much faster now.

So I went through the steps to check if I’m on a side chain, and I noticed that it says to find your node in the p2p-status section, but my node isn’t listed, and in the example their node isn’t listed either.

Status update: we are fully synced and it seems to be working. We ran into a couple more hiccups, but closing down the node and restarting seemed to do the trick. I am going to post a couple of the messages we had show up; I don’t know what they mean and only post them for the group. If it is something I need to change in the config file, let me know; it may just be a random error. Aion, do with it what you want :slight_smile:

18-03-05 18:59:05.820 INFO  SYNC [sync-import]: <import-best num=299829 hash=980bc0 txs=0>
18-03-05 18:59:06.573 INFO  SYNC [sync-import]: <import-best num=299830 hash=9a69c0 txs=0>
java: ./build/native/equi_miner.h:395: void* htalloc::alloc(u32, u32): Assertion `mem' failed.
./ line 41: 12062 Aborted                 (core dumped) env EVMJIT="-cache=1" ./rt/bin/java -Xms2g -cp "./lib/*:./lib/libminiupnp/*:./mod/*" org.aion.Aion "$@"

We then restarted the node and got this

18-03-05 19:05:17.305 INFO  GEN  [main]: loaded block <num=299830, root=a6a1769e... l=32>
18-03-05 19:05:17.830 INFO  GEN  [main]: <node-started endpoint=p2p://0ec5a465-3e3a-499e-b858-53bdf868dfb0@>
18-03-05 19:05:17.986 INFO  CONS [main]: <sealing-disabled>
18-03-05 19:06:42.704 INFO  SYNC [sync-import]: <import-best num=299831 hash=205fbc txs=0>

Not sure if I missed anything, but then we closed down when it hung again. (This might have been a memory error on our side, but again, I post for reference only. I have no idea what this says, as I am as noob at this as noob can be.)

18-03-05 19:06:46.845 INFO  SYNC [sync-import]: <import-best num=300008 hash=9512f3 txs=0>
18-03-05 19:06:47.855 INFO  SYNC [sync-import]: <import-best num=300009 hash=f95081 txs=0>
18-03-05 19:06:48.914 ERROR SYNC [p2p-write]: <res-bodies invalid>
^C18-03-05 19:09:34.240 INFO  GEN  [Shutdown]: Starting shutdown process...
18-03-05 19:09:34.250 INFO  GEN  [Shutdown]: Shutting down zmq ProtocolProcessor
18-03-05 19:09:37.253 INFO  GEN  [Shutdown]: Shutdown zmq ProtocolProcessor... Done!
18-03-05 19:09:37.254 INFO  GEN  [Shutdown]: Shutting down sealer
18-03-05 19:09:37.254 INFO  GEN  [Shutdown]: Shutdown sealer... Done!
18-03-05 19:09:37.254 INFO  GEN  [Shutdown]: Shutting down the AionHub...
18-03-05 19:09:37.255 INFO  GEN  [Shutdown]: <KERNEL SHUTDOWN SEQUENCE>
18-03-05 19:09:37.255 INFO  GEN  [Shutdown]: <shutdown-sync-mgr>
18-03-05 19:09:37.360 INFO  GEN  [Shutdown]: <shutdown-p2p-mgr>
18-03-05 19:09:37.361 INFO  GEN  [Shutdown]: TransactionExecThread shutting down...
18-03-05 19:09:37.373 INFO  GEN  [Shutdown]: TransactionExecThread waiting termination.
18-03-05 19:09:37.373 INFO  GEN  [Shutdown]: TransactionExecThread shutdown... Finished!
18-03-05 19:09:37.373 INFO  GEN  [Shutdown]: <shutdown-tx>
 at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(
 at java.base/java.util.concurrent.locks.ReentrantLock.lockInterruptibly(
 at java.base/java.util.concurrent.ArrayBlockingQueue.poll(
 at org.aion.p2p.impl.P2pMgr$ Source)
 at java.base/java.util.concurrent.Executors$
 at java.base/
 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(
 at java.base/java.util.concurrent.ThreadPoolExecutor$
 at java.base/java.lang.Thread

Other than that, we restarted the kernel again and it seems to be working fine.
Hopefully this didn’t make you laugh; I just like to share. I am going to post another piece over on the Web3 thread, as that is what we are trying to get to work now and we had some challenges with that.
Have a good night. :slight_smile:


I am still experiencing the random hang. No errors are being written to the node output, but my node is now a solid 800 blocks behind the seed nodes, and my self-num has not updated in a while.

I’m sure restarting, and possibly reverting some blocks, will fix the issue temporarily, but without watching the node like a hawk it is difficult to identify when this happens. Is there anything specific in the output I should be looking for? Anything I can provide that would help track down the source of this issue?

Managed to get my node to sync. Took a few goes; I reverted to block 290000 a couple of times and am now up to date.

Just a quick forum tip for you and anyone else that didn’t know this. If you copy-paste some code, encapsulate it in triple backticks like this:

```
[some code]
```

and it will render as a preformatted block, which is so much easier to read.

You can also wrap a word in single backticks and it will look like `this`, for inline code styling.


Currently syncing my node. `netstat -antp | grep java` is showing 17 peers!

@jason there still seems to be a flaw in the consensus / chain following code.

In my mind, the logic should be something like the following:

while (running) {
    bool syncing = !amITheHeaviestChainAmongMyPeers();

    if (syncing) {
        // ...do whatever housekeeping needs to be done in order to sync...
    } else if (mining) {
        // ...attempt to mine a block and submit it as the new tip...
    }
}
Right now there are effectively a bunch of 51% attacks happening and the logic in the client is seemingly and inadvertently facilitating it.

In my opinion, you should not be able to go back and mine starting at an older block without having to explicitly modify the client. Meaning: if you are intending to conduct a 51% attack, you have to go change code in the client in order to do it.

Right now, the client is basically allowing you to mine a sidechain (i.e. conduct a 51% attack) because it appears that it’s making no attempt to branch to the heaviest chain (by total difficulty) in a timely manner.

Bottom line is – it seems to me that the very first thing the client should do when it starts up is:

  • check to see if it’s on the heaviest chain among its peers
  • if it’s not, then the priority above everything else is to catch up
  • if it is, then make every attempt to keep up while allowing mining
  • if at any time it detects that it’s not caught up, even if it thought it was, then branch and catch up
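
The checklist above boils down to a single comparison each tick: if any peer reports a heavier chain, catching up takes priority over mining. A minimal sketch of that policy (method and variable names are my own hypothetical ones, not from the Aion codebase):

```java
// Sketch of the startup/steady-state policy described above.
// Hypothetical names; illustrative only, not Aion's actual sync code.
public class SyncPolicy {

    enum Action { CATCH_UP, SYNC_AND_MINE }

    // Any peer reporting a strictly heavier chain means we are behind,
    // and catching up outranks everything else, including mining.
    static Action decide(long myTotalDifficulty, long bestPeerTotalDifficulty) {
        return bestPeerTotalDifficulty > myTotalDifficulty
                ? Action.CATCH_UP
                : Action.SYNC_AND_MINE;
    }

    public static void main(String[] args) {
        System.out.println(decide(340, 360)); // behind the network -> CATCH_UP
        System.out.println(decide(360, 340)); // on the heaviest chain -> SYNC_AND_MINE
    }
}
```

Re-evaluating this on every imported block (not just at startup) would also cover the last bullet: a node that thought it was caught up immediately demotes itself back to catch-up mode when a heavier chain appears.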

cc: @closer, @Jim, @CryptoCow


I hear ya. I mined a side chain for a couple of days there… (on release 13)

Well, I don’t think it’s really like a 51% attack, because it just loses sync with the main chain and continues from that point on its own. With a 51% attack you’d actually have to go back on the chain and then catch up with the main chain, which you can only really do easily if you own 51% of the hashing power. In this case you are just one node, so it just produces a useless fork.

Then again, I do agree with what you are saying. It should just never continue mining if it loses sync with the main chain, but try to resync with it instead. Not sure how to do that in practice though; I don’t have a lot of experience with building blockchains.

On another note, I haven’t had any problems so far with 0.1.14

By definition that is exactly what a 51% attack is. Of course you have to have enough hashpower to catch up on your own.

I’m pretty sure I did it last night mining with 6 x 1080’s for a little while.

Regardless, it can also be a combination of people who are collectively stalled on a sidechain and then catch up by mining it.

The point being – once the client is no longer “caught up” it should stop submitting mined blocks and prioritize catching up above all else. Otherwise, it is indeed facilitating 51% attack opportunities and behavior characteristics.
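
To put a rough number on the “catch up on your own” point: in a toy model where expected chain growth is proportional to hashrate share, a fork that starts behind can only expect to overtake the main chain if it holds a majority of the hashpower. A back-of-the-envelope sketch (expected values only, my own illustration):

```java
// Toy model: over a window, each side is expected to mine blocks in
// proportion to its share of total hashrate. Illustration only.
public class ForkMath {

    // Can a fork with hashrate share q (0..1), starting `deficit` blocks
    // behind, expect to catch up within a window of `window` block-times?
    static boolean canCatchUp(double q, int deficit, int window) {
        double forkBlocks = q * window;          // blocks the fork expects to mine
        double mainBlocks = (1.0 - q) * window;  // blocks the main chain expects to mine
        return forkBlocks >= mainBlocks + deficit;
    }

    public static void main(String[] args) {
        System.out.println(canCatchUp(0.10, 50, 1000)); // one node, 10% hashrate: false
        System.out.println(canCatchUp(0.60, 50, 1000)); // majority hashrate: true
    }
}
```

In expectation, no window is long enough at q ≤ 0.5, which is why a lone stalled node produces a dead fork while a majority (or a colluding group) produces a viable one.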


So happy Aion have their explorer up and running. I seem to be stuck on a side chain again; at least now I know the last block I mined. Now to get remote access to my box so I can run the revert command (./ -r ).


Yeah I need to set up remote access too - it’s so annoying being at work and wondering if I’m stuck on a side chain or not :stuck_out_tongue:

On the upside I was able to get off my side chain after about 4 hours today without having to manually intervene.

Just an interesting behavior I observed just now.

I sent a series of transactions to another address and noticed they did not get processed. Upon further inspection, my node had stalled (it had stopped updating self-num). After stopping and restarting the kernel, it caught up to the current block and all queued transactions were processed.