Autojump: Database entry(ies) regularly wiped

Created on 16 Nov 2015  ·  35 Comments  ·  Source: wting/autojump

Hi,

This is my issue: I keep using j pu, which brings me to a specific directory. This works for a certain amount of time, sometimes weeks, sometimes days, then one day it reports . and can no longer jump to my dir. There is nothing in the help that suggests cleaning the db or anything similar, so I can only guess what's going on... but I have no idea.

bug priority-high

Most helpful comment

I don't know but the issue should be fixed in the code anyway.

All 35 comments

Hi,

Same issue here, except I think it only happens after a reboot.

Apparently not fixed. This is really annoying. Any ideas?

Sorry for the late response but I have a rough idea of what's happening and how to fix it.

On every directory change autojump locks the data file and updates an entry, stored in a database-like format. There's a race condition (exacerbated by shell scripts / commands that traverse directories, like find) where the lock fails and the db gets overwritten.

The proper way to fix this is either:

  1. Fix the file locking to be race-condition safe.
  2. Switch from a database-like format to an append-only log (like .bash_history, .zsh_history, etc.), and only calculate the weights when someone invokes autojump to switch directories (aka j).
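A minimal sketch of what option 1 could look like. This is not autojump's actual code: locked_update and the sidecar .lock file are hypothetical, and fcntl.flock is Unix-only. Only the tab-separated "weight<TAB>path" layout mirrors autojump's text format.

```python
import fcntl
import os

def locked_update(data_path, update_fn):
    # Hypothetical sketch: hold an exclusive flock on a sidecar lock file
    # for the whole read-modify-write cycle, so concurrent shells cannot
    # interleave and clobber the db. fcntl.flock is Unix-only.
    with open(data_path + '.lock', 'w') as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)  # blocks until no other holder
        try:
            entries = {}
            if os.path.exists(data_path):
                with open(data_path) as f:
                    for line in f:
                        weight, path = line.rstrip('\n').split('\t', 1)
                        entries[path] = float(weight)
            update_fn(entries)  # caller mutates the weights in place
            with open(data_path, 'w') as f:
                for path, weight in entries.items():
                    f.write('%s\t%s\n' % (weight, path))
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)
```

The key point is that the lock must cover the read and the rewrite together; locking only the write still lets two processes read the same stale state.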

Hmm, I just looked at the code and I don't see any actual locking going on.
Moreover, the backup is done right after the move from the temp file to the real DB, so if the move fails (there's no error checking), we're backing up an empty file.

I guess improving this shouldn't be that hard.

Maybe #383 didn't guard all places with flock, but there is a simpler approach: just wrap the whole autojump invocation in flock, like:
alias autojump='flock /tmp/autojump.lock autojump'

@trou can you test this?

_UPD: fix syntax issue_

I just added it to my bashrc, we'll see.

It doesn't seem to help :(

Hm, how can this happen?
Was your machine powered off abnormally (i.e. via the power button)?

Also, maybe autojump is called from plain sh (instead of bash) in
parallel? In that case the bash alias won't work, and you can try
wrapping it via a script, something like:
mv /usr/bin/autojump{,.orig}
printf '#!/bin/sh\nexec flock /tmp/autojump.lock /usr/bin/autojump.orig "$@"\n' > /usr/bin/autojump
chmod +x /usr/bin/autojump

Or I'm missing something completely.

I don't know but the issue should be fixed in the code anyway.

I've been using autojump for around a year now, and suddenly it looks like it cleaned out the entire history. Looking at ~/.local/share/autojump, I see a new file was created. I saw https://github.com/wting/autojump/issues/208, which points here. I'm using:
autojump-22.3.2-1.fc24.noarch

Any other debugging info I can provide?

I also hit this regularly. Is there a way to debug it?

@wting: You mentioned that one way to resolve the problem might be to switch to an "append only log" and calculate weights when autojump is run.

I suppose you are avoiding this solution because the log might become prohibitively long over time. (And also because it would be nice if the existing system worked the way we think it should.)

But perhaps a hybrid solution is possible? Append entries to a log, but add an autojump command that converts them to the database format. When autojump is run, consult both the log and the database to calculate the actual weights.

The foremost downside to this is that it requires the user to collapse the database manually. A workaround would be to do a random test each time autojump is run to determine whether the db should be collapsed. This would at least decrease the odds of a race condition.

@azat: I'm going to try your solution.

Since I implemented a patch proposed in #482, I have not experienced this bug.

I'm interested in hearing @r-barnes results, too, though. If successful, is there a reason this "early locking" could not be used in autojump?

Argh! Got an update that overwrote the patch from #482. It happened on the next reboot. I don't understand how other people aren't experiencing this. It happens to me constantly.

I did not have 100% success with #482. The database still seems to clear or, perhaps, is limited in size. It doesn't seem to grow much beyond 23 entries for me.

I did have 100% success with #482: April 21 until July 25, and I have 279 entries currently. This suggests our situations are different, which means there are multiple triggers for this bug. Whatever I'm doing that causes it is always related to the different filesystems that autojump uses to manage the db, like @Frefreak suggested. And, just as I suggested, there could be multiple code paths that lead to the same bug, some of which I just never use but you do.

Ah, it happened again someday last week, after a long stretch since the last time. 349 entries were removed, and the file size shrank from 93K to 77K.

Any updates?

I've ended up developing my own solution (zsh only). Way smaller, more useful and more reliable :)
https://github.com/kurkale6ka/zsh (see the README at that link)

Looks like there are several existing alternatives. I'm trying fasd now (https://github.com/clvv/fasd), which is just one of them.

See https://github.com/rupa/z/issues/198 -- similar bug report in another project that has a similar issue.
I can't find a working autojump-like solution :(

FYI, I haven't had this issue for more than one year after I changed the temporary file to the same directory as the data file. The patch is:

--- autojump_data.py    2018-09-07 15:28:30.488681864 +0800
+++ /usr/bin/autojump_data.py   2017-08-26 15:43:50.136781805 +0800
@@ -120,11 +120,12 @@

 def save(config, data):
     """Save data and create backup, creating a new data file if necessary."""
-    create_dir(os.path.dirname(config['data_path']))
+    data_dir = os.path.dirname(config['data_path'])
+    create_dir(data_dir)

     # atomically save by writing to temporary file and moving to destination
     try:
-        temp = NamedTemporaryFile(delete=False)
+        temp = NamedTemporaryFile(delete=False, dir=data_dir)
         # Windows cannot reuse the same open file name
         temp.close()

Moving files across devices is not atomic (it does a copy + delete operation), so the temp file and the data file should be on the same device for the rename to overwrite atomically.

That makes a lot of sense to me. A move is only atomic if it's on the same partition...
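The pattern the patch implements can be sketched on its own: create the temp file in the destination's directory so the final rename never crosses a filesystem boundary. atomic_write below is a hypothetical helper, not autojump code.

```python
import os
from tempfile import NamedTemporaryFile

def atomic_write(path, text):
    # Hypothetical helper: write to a temp file in the destination's own
    # directory so the final rename stays on one filesystem, where
    # os.replace() is an atomic rename on POSIX.
    dest_dir = os.path.dirname(os.path.abspath(path))
    with NamedTemporaryFile('w', dir=dest_dir, delete=False) as tmp:
        tmp.write(text)
        tmp.flush()
        os.fsync(tmp.fileno())  # make sure the bytes hit disk first
    os.replace(tmp.name, path)  # readers see the old or new file, never a partial one
```

Had the temp file lived in /tmp (often a separate tmpfs), the "rename" would silently become a non-atomic copy + delete, which is exactly the window this bug needs.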

This is implemented in v22.5.2 here: https://github.com/wting/autojump/commit/bc4ea615462adb15ce53de94a09cec30bcc5dc0a

For now please install and test from source and report back if data is still being lost.

It turns out I actually opened a pull request (#495) with this change last year, but it went unnoticed.

I'm not sure if enough time has passed for testing, but I haven't found any problems with database wiping. What do you think, @wting? And if it looks right to you, can you create a new release so that distros can pick it up?

I started to suffer from this wiping problem. After several hours, autojump.txt is wiped out. Any way to debug it?

I recently noticed autojump failing to work as expected. Then I checked to find:

[hendry@t480s ~]$ wc -l ~/.local/share/autojump/autojump.txt
35 /home/hendry/.local/share/autojump/autojump.txt

Er... it appears to have been reset. Any ideas WHY?

There was a race condition that occasionally wiped the database entry that was fixed in v22.5.3.

@minjang, @kaihendry: Can y'all run autojump -v and share the version number listed? If it's less than that version then please upgrade and/or install from source.

[hendry@t480s ~]$ autojump -v
autojump v22.5.1
[hendry@t480s ~]$ pacman -Qi autojump
Name            : autojump
Version         : 22.5.1-2
Description     : A faster way to navigate your filesystem from the command line
Architecture    : any
URL             : https://github.com/wting/autojump
Licenses        : GPL3
Groups          : None
Provides        : None
Depends On      : python
Optional Deps   : None
Required By     : None
Optional For    : None
Conflicts With  : None
Replaces        : None
Installed Size  : 121.00 KiB
Packager        : Felix Yan <[email protected]>
Build Date      : Sat 10 Nov 2018 06:58:45 AM
Install Date    : Wed 14 Nov 2018 08:57:48 AM
Install Reason  : Explicitly installed
Install Script  : No
Validated By    : Signature

I'm an Archlinux user btw :rofl:

In 18.04.2 LTS, I have v22.5.1. I'm not sure if the update has made it to the Debian/Ubuntu repos, or if the version's been frozen. Installed update: we'll see how it goes! (Thanks so much.)

I'm not sure who maintains the Debian package for autojump these days, but it's possible that they only build off stable tags. While the master branch has been on v22.5.3 for >6 months, I only tagged v22.5.3 tonight.

Your best bet to deal with this random data loss is to install from source following these instructions.
