Cross-platform command-line AV1 / VP9 / HEVC / H264 encoding framework with per-scene quality encoding. Fork of https://github.com/master-of-zen/Av1an


Av1an

A cross-platform framework to streamline encoding



Easy, Fast, Efficient and Feature Rich

An easy way to start using AV1 / HEVC / H264 / VP9 / VP8 encoding. AOM, rav1e, SVT-AV1, SVT-VP9, VPX, x265, and x264 are supported.

Example with default parameters:

av1an -i input

With your own parameters:

av1an -i input -e aom -v " --cpu-used=3 --end-usage=q --cq-level=30 --threads=8 " -w 10
--split-method aom_keyframes --target-quality 95 --vmaf-path "vmaf_v0.6.1.pkl"
--min-q 20 --max-q 60 -f "-vf scale=-1:1080" -a "-c:a libopus -ac 2 -b:a 192k"
-s scenes.csv -l my_log -o output

Usage

-i   --input            Input file(s), or Vapoursynth (.py,.vpy) script
                        (relative or absolute path)

-o   --output-file      Name/path for the output file (Default: (input file name)_(encoder).mkv)
                        The output file extension is always `.mkv`

-e --encoder            Encoder to use
                        (`aom`,`rav1e`,`svt_av1`,`vpx`,`x265`, `x264`)
                        Default: aom
                        Example: -e rav1e

-v   --video-params     Encoder settings flags (default parameters are used if not set)
                        Must be inside ' ' or " "

-p   --passes           Set number of passes for encoding
                        (Default: AOMENC: 2, rav1e: 1, SVT-AV1: 1, SVT-VP9: 1,
                        VPX: 2, x265: 1, x264: 1)

-w   --workers          Override number of workers.

-r   --resume           Resume a stopped/interrupted encode, keeping all progress.
                        Resuming automatically skips scene detection, audio encoding/copying,
                        and splitting, so resuming is only possible after encoding has started.
                        The temp folder must be present to resume.

--keep                  Don't delete the temporary folder after the encode has finished.

-q --quiet              Do not print a progress bar to the terminal.

-l --logging            Path to .log file (by default created in the temp folder)

--temp                  Set path for the temporary folder. Default: .temp

-c --concat             Concatenation method to use for splits. Default: ffmpeg
                        [possible values: ffmpeg, mkvmerge, ivf]

--webm                  Outputs webm file.
                        Use only if you're sure the source video and audio are compatible.

FFmpeg options

-a   --audio-params     FFmpeg audio settings (Default: copy audio from source to output)
                        Example: -a '-c:a libopus -b:a  64k'

-f  --ffmpeg           FFmpeg video options.
                        Applied to each encoding segment individually.
                        (Warning: cropping doesn't work with Target Quality mode
                        without also specifying it in --vmaf-filter)
                        Example:
                        -f " -vf scale=320:240 "

--pix-format            Set a custom pixel/bit format for piping
                        (Default: 'yuv420p10le')
                        Options should be adjusted according to the encoder.

Segmenting

--split-method          Method used for generating splits. (Default: pyscene)
                        Options: `pyscene`, `aom_keyframes`, `ffmpeg`, `none`
                        `pyscene` - PySceneDetect, content-based scene detection
                        with threshold.
                        `aom_keyframes` - uses the stat file of a 1-pass aomenc encode
                        to get the exact places where the encoder will place new keyframes.
                        (Keep in mind that speed also depends on the aomenc parameters set)
                        `ffmpeg` - uses FFmpeg's built-in content-based scene detection
                        with threshold. Slower and less precise than pyscene, but requires
                        fewer dependencies.
                        `none` - skips scene detection. Useful for splitting by time.

-m  --chunk-method      Determines how chunks are made for encoding.
                        By default the best available method is selected automatically
                        in this order: vs_ffms2 > vs_lsmash > hybrid.
                        vs_ffms2 or vs_lsmash are recommended.
                        ['hybrid'(default), 'select', 'vs_ffms2', 'vs_lsmash']
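The automatic fallback described above amounts to picking the first method in the preference order that is available. A minimal sketch; the availability check and function name are illustrative, not Av1an's actual code:

```python
# Illustrative sketch of the chunk-method fallback order described above.
# `installed` stands in for a check of which Vapoursynth source plugins are
# present; it is an assumption for illustration, not Av1an's real detection.
PREFERENCE = ["vs_ffms2", "vs_lsmash", "hybrid"]

def pick_chunk_method(installed):
    """Return the first preferred chunk method that is available.
    'hybrid' needs no plugin, so it always works as the final fallback."""
    for method in PREFERENCE:
        if method == "hybrid" or method in installed:
            return method
    return "hybrid"
```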


-t  --threshold         PySceneDetect threshold for scene detection. Default: 35

-s   --scenes           Path to file with scene timestamps.
                        If the file doesn't exist, a new one will be generated
                        in the current folder.
                        The first run generates the timestamps; later runs reuse them.
                        Example: "-s scenes.csv"

-x  --extra-split       Add extra splits if the frame distance between splits is bigger
                        than the given value. Pair with `none` for time-based splitting,
                        or with any other splitting method to break up massive scenes.
                        Example: for a 1000-frame video with a single scene,
                        -x 200 will add splits at frames 200, 400, 600 and 800.
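The extra-split placement in the example above can be sketched as follows. This is an illustrative sketch of the documented behaviour, not Av1an's actual implementation:

```python
def extra_split_positions(scene_len, max_dist):
    """Place extra split points every `max_dist` frames inside a single
    scene of `scene_len` frames; the final segment keeps the remainder.
    Illustrative sketch, not Av1an's real split logic."""
    return list(range(max_dist, scene_len, max_dist))
```

For the 1000-frame, single-scene example with a maximum distance of 200, this yields splits at frames 200, 400, 600 and 800, matching the documented behaviour.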

--min-scene-len         Specifies the minimum number of frames in each split.

Target Quality

--target-quality        Quality value to target.
                        VMAF is used as the underlying metric.
                        Supported by all encoders supported by Av1an.
                        Works best in the range 85-97.
                        When using this mode, you must specify the full encoding options.
                        These options must include a quantizer-based mode with some
                        quantizer option provided (this value will be replaced):
                        `--crf`, `--cq-level`, `--quantizer` etc.

--target-quality-method Algorithm to use.
                        Options: per_shot

--min-q, --max-q        Minimum and maximum Q value limits.
                        If not set by the user, the encoder's default range is used.

--vmaf                  Calculate VMAF after encoding is done and make a plot.

--vmaf-path             Custom path to libvmaf models.
                        Example: --vmaf-path "vmaf_v0.6.1.pkl"
                        It is recommended to place both files in the encoding folder
                        (`vmaf_v0.6.1.pkl` and `vmaf_v0.6.1.pkl.model`)
                        (Required if VMAF calculation doesn't work by default)

--vmaf-res              Resolution scaling for VMAF calculation,
                        vmaf_v0.6.1.pkl is 1920x1080 (by default),
                        vmaf_4k_v0.6.1.pkl is 3840x2160 (don't forget about vmaf-path)

--probes                Number of probes for interpolation.
                        1 and 2 probes have special cases to try to work with few data points.
                        The optimal level is 4-6 probes. Default: 4
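The probe-and-interpolate idea works roughly like this: short probes are encoded at a few Q values, VMAF is measured for each, and the Q expected to hit the target score is estimated between the bracketing probes. Below is a minimal linear-interpolation sketch, assuming VMAF decreases monotonically as Q grows; Av1an's actual fitting is more elaborate:

```python
def interpolate_q(probes, target):
    """probes: list of (q, vmaf) pairs, with VMAF falling as Q rises
    (an assumption for this sketch). Returns the Q estimated to reach
    `target` by linear interpolation between the two bracketing probes."""
    probes = sorted(probes)  # ascending Q, hence descending VMAF
    for (q1, v1), (q2, v2) in zip(probes, probes[1:]):
        if v2 <= target <= v1:
            # interpolate between the bracketing probes
            return q1 + (v1 - target) * (q2 - q1) / (v1 - v2)
    # target outside the probed range: clamp to the nearest endpoint
    return probes[0][0] if target > probes[0][1] else probes[-1][0]
```

With probes at Q 20 (VMAF 97) and Q 40 (VMAF 91), a target of 94 lands at Q 30; more probes narrow the brackets and improve the estimate.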

--probe-slow            Use the video encoding parameters for VMAF probes to get a more
                        accurate Q at the cost of speed.

--vmaf-filter           Filter used for VMAF calculation. The passed format is filter_complex,
                        so if a crop filter ` -f " -vf crop=200:1000:0:0 "` is used,
                        `--vmaf-filter` must be ` --vmaf-filter "crop=200:1000:0:0"`

--probing-rate          Set the rate for VMAF probes (every Nth frame is used in the probe; Default: 4)

--vmaf-threads          Limit the number of threads used for VMAF calculation.
                        Example: --vmaf-threads 12
                        (Required if VMAF calculation errors out on high core counts)

Main Features

Av1an splits video by scene for parallel encoding, because AV1 encoders are currently not very good at multithreading and scale to only a very limited number of threads.

  • PySceneDetect is used to split the video by scenes and run multiple encoders in parallel.
  • Vapoursynth script input support.
  • The fastest way to encode AV1 without losing quality: as fast as your CPU cores allow :).
  • Target Quality mode: target an end-result reference visual quality, with VMAF as the underlying metric.
  • Resuming encoding without losing encoded progress.
  • Simple and clean console output.
  • Automatic detection of the number of workers the host can handle.
  • Builds the encoding queue with bigger chunks first, minimizing waiting for the last scene to encode.
  • Both video and audio transcoding with FFmpeg.
  • Logging of the progress of all encoders.
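The "bigger chunks first" queue ordering is a straightforward descending sort: starting the largest scene early keeps workers from idling while one huge chunk finishes last. A minimal sketch with an assumed `(name, frame_count)` chunk shape, not Av1an's real chunk type:

```python
def build_queue(chunks):
    """Order chunks largest-first so the biggest scene starts encoding
    early. `chunks` is a list of (name, frame_count) pairs -- an
    illustrative shape assumed for this sketch."""
    return sorted(chunks, key=lambda c: c[1], reverse=True)
```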

Install

Warning! Av1an Git is currently in a state of change. Building and using the latest Av1an Git differs from the stable PIP release.

For the current latest version, follow these instructions. If the latest changes are not required, just use the PIP version.

Docker

Av1an can be run in a Docker container with the following command, assuming your video is in the current directory.

Linux

docker run --privileged -v "$(pwd):/videos" --user $(id -u):$(id -g) -it --rm masterofzen/av1an:latest -i S01E01.mkv {options}

Windows

docker run --privileged -v "${PWD}:/videos" -it --rm masterofzen/av1an:latest -i S01E01.mkv {options}

The Docker image can also be built locally with

docker build -t "av1an" .

To use a different directory, replace $(pwd) with that directory's path

docker run --privileged -v "/c/Users/masterofzen/Videos":/videos --user $(id -u):$(id -g) -it --rm masterofzen/av1an:latest -i S01E01.mkv {options}

The --user flag is required on Linux to avoid permission issues with the Docker container being unable to write to the output location. If you get permission issues, ensure your user has access to the folder you are using to encode.

Docker tags

The docker image has the following tags:

Tag        Description
latest     Latest stable av1an release
master     Latest av1an commit on the master branch
sha-#####  The commit referenced by the hash
#.##       A specific stable av1an release

Support the developer

Bitcoin - 1GTRkvV4KdSaRyFDYTpZckPKQCoWbWkJV1