Based on #156 that was recently merged, I want to propose the following feature: the `--max-wait` option could take a sequence of maximum wait bounds (or there could be another option for that). For example:

```
asciinema rec -w 0.4 0.8 1 3
```

(the format is discussable; probably something with a non-space separator is better/easier to parse: `0.4,0.8,1,3`)
This would generalize `max-wait` from a single limit to a set of bounds. It would allow making more adjustments to the time flow of the recording, such as minimizing typing delays (making it more fluent), while still being able to keep short and long pauses (to point something out).
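The proposal leaves the exact semantics open. One plausible interpretation, sketched in Python (the function name and the snap-down behavior are my assumptions, not part of the proposal):

```python
def quantize(delay, bounds):
    """Clamp a frame delay to a set of wait bounds (assumed semantics).

    Delays below the smallest bound pass through unchanged, so typing
    stays fluent; longer delays snap down to the largest bound they
    reach, so short and long pauses are kept but capped.
    """
    bounds = sorted(bounds)
    if delay <= bounds[0]:
        return delay
    # largest bound not exceeding the delay
    return max(b for b in bounds if b <= delay)
```

With the bounds from the example, a 0.2 s typing pause would stay 0.2 s, a 2 s pause would become 1 s, and a 10 s pause would become 3 s.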
What do you think about this feature? I think it's not hard to do and I could implement it.
@sickill any opinion about this?
That's an interesting idea. I have a feeling that this would be used by a very small number of users. `-w` as it is today is already very useful, but from what I know not many people use it; people just use the defaults.
I'm on the fence here... On one hand I really like the idea; on the other I know that it would add a fair amount of code, code that will need to be maintained for probably 1% of users (prove me wrong on the estimates here ;))
What about this:
I've had this idea for a while to create a separate set of tools for processing asciicasts. Stuff like speeding a recording up 2x, applying the `-w` algorithm to an already recorded asciicast file, locating and erasing arbitrary text (visible passwords).
We could have a mechanism for adding extra commands to asciinema, done the same way as in git: when you run `asciinema foo`, it checks if `foo` is an internal command; if not, it looks for an `asciinema-foo` binary in `$PATH` and runs it instead. You could write extra commands in any language.
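The PATH-lookup part of such a dispatch mechanism is small. A sketch in Python, using the standard library's `shutil.which` (the internal-command set and error message are made up for illustration):

```python
import shutil
import subprocess
import sys

INTERNAL = {"rec", "play", "auth", "upload"}  # assumed set of built-ins


def find_external(cmd):
    """Git-style lookup: an `asciinema-<cmd>` binary somewhere on $PATH."""
    return shutil.which(f"asciinema-{cmd}")


def dispatch(argv):
    """Run an internal command, or fall back to an external binary."""
    cmd, *rest = argv
    if cmd in INTERNAL:
        raise NotImplementedError("internal commands not sketched here")
    external = find_external(cmd)
    if external is None:
        sys.exit(f"asciinema: '{cmd}' is not an asciinema command")
    return subprocess.call([external, *rest])
```

This mirrors how git resolves `git foo` to a `git-foo` executable, which is what makes extensions language-agnostic.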
Having the above, we could create `asciinema-quantize`, or a more general `asciinema-process`, which could support various switches (like an improved `-w`, maybe `-s` to change the speed of the whole recording). It would read input JSON, process it according to the options, and write output JSON. If a command proved popular, it could be promoted to an internal command (or be shipped in asciinema packages as `asciinema-*` binaries).
I'm open to other suggestions!
Actually I was thinking about the same thing: doing it externally, just processing the recorded JSON file. The other thing is that I don't have much time right now to learn Go, but if this external-commands integration worked with any executable named by the convention, it could make extending asciinema much easier.
Btw, I guess you know about asciinema2gif, which is quite useful and just works. I saw some discussion here about GIF conversion and that it's not in the plans, and I perfectly understand that, but since users may have different needs, such extensibility could be a very nice way to let them fulfill their specific needs themselves.
Glad to hear a similar opinion. Are you more familiar with Python? Which language would you implement this in?
@sickill well, actually I was using jq with some primitive filters so far. For example:

```
jq '.stdout |= map(.[0] *= 0.5)' record.json > record.twice-faster.json
```

will produce a JSON file that `asciinema play` will play twice as fast. Or

```
jq '.stdout |= map(.[0] |= ([., 1.234] | min))' record.json > record.cut.json
```

is the same as setting the `max-wait` time to `1.234`. I'm not a jq guru, so it can probably be done more simply, but this is quite straightforward (if one is familiar with jq in general) and it works. Implementing the proposed time quantization feature this way is a bit more involved, but still doable.
This can, of course, be wrapped in any kind of shell script with specific options. But if you want some more integration, it can also be done in a _very similar_ manner using JMESPath, which has various implementations, including Python and Go.
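The two jq filters above translate almost one-to-one into Python. A sketch assuming the v1 asciicast format, where `stdout` is a list of `[delay, text]` pairs (the function names are mine):

```python
def scale_delays(cast, factor):
    """Multiply every frame delay; factor 0.5 plays the cast twice as
    fast, like the first jq filter above."""
    cast["stdout"] = [[delay * factor, text] for delay, text in cast["stdout"]]
    return cast


def cap_delays(cast, max_wait):
    """Clamp every frame delay to max_wait, like the second jq filter."""
    cast["stdout"] = [[min(delay, max_wait), text] for delay, text in cast["stdout"]]
    return cast
```

Wrapped with `json.load`/`json.dump`, these reproduce the jq one-liners.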
Hey @laughedelic ,
I had pretty much the same requirement as yours, so I ended up creating this: https://github.com/cirocosta/asciinema-edit
It takes an asciinema cast (v2) and mutates the event stream according to what you need.
I just finished adding quantization in the way you described, btw 👍
Hope it's useful for you!
Thx!
Hi @cirocosta! Thanks for pinging me! It's awesome that you've made it into a tool and that it works with the v2 format. I'll try it next time I record an asciinema cast.