Make a video file from a sequence of still images using FFmpeg.
ffmpeg(dir = ".", pattern, output, output_dir = ".", rate = "ntsc",
  delay = 1, start = 1, size = "source", preset = "ultrafast",
  codec = "default", format = "yuv420p", lossless = FALSE, min.rate = 10,
  fps.out = rate, alpha = 1, overwrite = FALSE, glob = FALSE,
  details = FALSE)
dir | character, directory containing the input images. Defaults to the working directory. |
pattern | character, for matching a set of input image files. See details for acceptable and possible alternative patterns. |
output | character, output file name. |
output_dir | character, output directory. Defaults to the working directory. |
rate | integer or character, intended framerate of the input image sequence in Hertz (Hz) or frames per second (fps). See details. |
delay | numeric, time delay between frames in the output video, in seconds. Alternative to rate (delay = 1/rate). See details. |
start | integer, frame to start from in the input image sequence. Defaults to 1. |
size | character, the dimensions of the video output. Defaults to "source", the dimensions of the input images. See details. |
preset | character, encoding presets available in FFmpeg. Defaults to "ultrafast". See details. |
codec | character, the video codec used. Defaults to "default". See details. |
format | character, the pixel format. Defaults to "yuv420p". See details. |
lossless | logical, use lossless H.264 encoding if applicable. Defaults to FALSE. See details. |
min.rate | integer, the minimum frame rate for non-gif video output. Defaults to 10. See details. |
fps.out | integer or character, framerate of the output video. This can be given in the same ways as rate, which it defaults to. See details. |
alpha | numeric, from 0 to 1.0. Only applicable when merging multiple image sequences. Defaults to 1. See details. |
overwrite | logical, overwrite an existing output file. Defaults to FALSE. |
glob | logical, use file globbing to match input files. Defaults to FALSE and is ignored on Windows. See details. |
details | logical, whether to show FFmpeg output on the R console. Defaults to FALSE. |
Returns the system call to FFmpeg as a character string.
ffmpeg is a wrapper function around the popular FFmpeg command line multimedia framework.
It translates arguments provided by the user in familiar R syntax into a system call to the ffmpeg command line tool, which must be installed on the system.
ffmpeg does not provide complete flexibility to allow making every possible valid call to FFmpeg; users who are well versed in FFmpeg can use the command line utility directly or pass their custom calls to system from within R.
The ffmpeg R function is primarily useful to users not well versed in the FFmpeg multimedia framework who will be satisfied with the level of flexibility provided.
Since this function is provided in the context of the mapmate package, it is aimed at assisting with converting still image sequences to video.
While additional uses may be incorporated into ffmpeg in the future, the FFmpeg multimedia framework itself provides a far broader suite of tools and functionality than is needed here.
Keep in mind that the purpose of mapmate is not to generate animations directly from R; see packages like animation if that is more the goal.
The goal mapmate attempts to fulfill is strictly that of animation pre-production, and it does so by focusing on the generation of still image sequences.
Any animation is expected to be done later by the user via software dedicated to video editing and production.
ffmpeg is provided in mapmate as an exception to that rule, for users willing to trade the full control and flexibility of dedicated video editing and production software, with which mapmate aims to avoid entangling itself, for the convenience of generating relatively basic video output directly from an R session.
Ultimately, if you want an elaborate video, do not rely on ffmpeg to splice, merge, overlay, and juxtapose all your layers, to crop, pan, and rotate, or to apply effects, transitions, and every other form of video processing to your image sequences; finish the production outside of R, because that is what makes sense.
If you are an FFmpeg expert, you don't need to use ffmpeg at all (but perhaps consider helping to improve this code!).
If you are not an FFmpeg expert, use other video editing software.
There always comes a point where it makes the most sense to transition from one application to another.
When external solutions exist, it does not make sense to port the solution to every problem into R.
Future package versions may provide more functionality and control over video production directly from R through ffmpeg or other functions, but at this time this is not a primary development goal for mapmate.
A standard pattern takes a form like myimages%04d.png, which requires specifying the entire, non-changing file name, with the only substitution being the unique, ordered, consecutive integer file numbering component.
The pattern indicates how many places are occupied by the file indices, which should be constant. In this example, %04d represents the file numbering 0000, 0001, 0002, ..., 9999.
If using Windows, you must use this approach. Any image sequences generated by mapmate will follow this kind of file naming convention.
Image sequences not made by mapmate will also commonly follow this convention, but not always, in which case you will have to rename your files.
An alternative and often convenient way to provide a general pattern for matching a set of input files is with globbing. However, globbing is not available on Windows.
Linux users may find this additional option helpful in cases where file naming is not quite as described above or, for example, when there are multiple sequences of files in one directory.
If glob=TRUE, wildcards can be used in the pattern argument, e.g., pattern="*png", pattern="myimages*png", or pattern="*images0*.png".
The default is glob=FALSE, and glob is simply ignored on Windows.
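As a sketch of the two pattern styles (file names here are hypothetical, and FFmpeg must be installed for the calls to succeed):

```r
# Hypothetical example: a sequence frame_0001.png, frame_0002.png, ...
# in the working directory.
library(mapmate)

# Standard C-style numbering pattern (the only option on Windows):
ffmpeg(pattern = "frame_%04d.png", output = "video.mp4", rate = 30)

# Globbing alternative (glob is ignored on Windows):
ffmpeg(pattern = "frame_*png", output = "video.mp4", rate = 30, glob = TRUE)
```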
pattern may be a vector referring to multiple image sequences. This is for merging or blending layers into one output video file.
The first vector element refers to the top layer among the image sequences.
All files do not need to be in the same directory; dir can be vectorized to match pattern if sequences are in different locations.
Similarly, rate, delay, and start can be vectors. If pattern is the only vector argument, the other arguments are duplicated to match; otherwise, vectors should be of equal length.
Merging capabilities are limited. An expert in the use of FFmpeg should use it directly rather than via this wrapper function.
The current package version of ffmpeg allows merging more than two sequences without error, but testing has not confirmed this works correctly, as all layers do not always appear in the output video.
If merging sequences with this function, it is recommended that they have the same number of frames, begin from the same starting frame, and proceed at the same frame rate, though this is not strictly required.
Merging only two sequences at a time is also recommended, or they may not all display.
Sequences must be very similar in a variety of respects; for example, images must be the same dimensions across sequences.
For greater control, use FFmpeg directly from the command line and consult the official FFmpeg documentation, or help improve this wrapper function via Github issues and pull requests.
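A hedged sketch of merging two sequences, with hypothetical directories and file names; the first pattern element is the top layer:

```r
# Merge two sequences of equal length, dimensions, and framerate.
# Directory and file names here are illustrative only.
library(mapmate)

ffmpeg(dir = c("frames/overlay", "frames/background"),
       pattern = c("lines_%04d.png", "map_%04d.png"),
       output = "merged.mp4",
       rate = c(30, 30), start = c(1, 1))
```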
Remember that mapmate generates still image sequences intended for later use in a dedicated video editing program, typically one with a GUI, unlike the command line FFmpeg.
In such a program, it is assumed the user may be dropping multiple image sequences on different tracks of a project timeline and layering the tracks together, which is why the default background png color is transparent.
In this default case, using alpha less than 1.0 is generally unnecessary when merging two image sequences into a video with FFmpeg.
If not using defaults, alpha may not provide the flexibility desired.
For rate, non-integer numeric values are rounded. Character options may be a valid abbreviation such as "ntsc" or a quoted ratio such as "30000/1001".
Note that this is the familiar 29.97 (or 29.97003, to be exact), but FFmpeg does not accept values like these directly.
Using delay instead of rate is more limiting since delay is converted back to a rate (\(rate=1/delay\)), which must then be rounded to an integer.
Using rate is recommended. Arbitrary, non-standard framerates may lead to rendered videos that do not play properly in many media players.
For common settings and character abbreviations, see the FFmpeg standard video rates.
rate technically refers to the assumed or intended framerate of the input image file sequence.
This is important to mention because of the distinction between input and output framerates in FFmpeg.
See the details below on min.rate and fps.out to understand the differences and how to avoid some common problems.
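The rounding involved in the delay-to-rate conversion can be illustrated with plain R arithmetic:

```r
# delay is converted to a framerate as rate = 1/delay, then rounded,
# so only delays of the form 1/integer survive the conversion exactly.
round(1 / 0.04)  # 25 fps, exact
round(1 / 0.03)  # intended 33.33... fps, rounded to 33 fps
```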
If size is not set to "source", the output video is scaled.
size can be a character string of dimensions in width by height format, such as "720x480", or an abbreviated standard such as "ntsc".
See the FFmpeg standard video sizes for common dimensions and available abbreviations.
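For example (hypothetical file names), either form of size produces a scaled output:

```r
library(mapmate)

# Explicit width-by-height dimensions:
ffmpeg(pattern = "frames_%04d.png", output = "small.mp4", size = "720x480")

# Equivalent FFmpeg standard-size abbreviation (ntsc is 720x480):
ffmpeg(pattern = "frames_%04d.png", output = "small.mp4", size = "ntsc")
```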
Available preset values, from fastest encoding to slowest, are ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, and veryslow.
Faster presets correspond to larger file sizes; slower presets take longer because they apply greater compression.
codec is ignored if the file name in output ends with .gif.
For other video output file types, a default codec is chosen based on the file extension in output when codec="default".
This can be overridden with options like codec="h264", "libx264", "libvpx", "prores", "qtrle", etc., but the user needs to know which codecs can be used for which output types or errors will be thrown.
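For instance (hypothetical file names), a gif output takes no codec, while other containers accept an override:

```r
library(mapmate)

# gif output: codec (and format) settings are not applied.
ffmpeg(pattern = "frames_%04d.png", output = "anim.gif", rate = 5)

# webm output with an explicit codec override:
ffmpeg(pattern = "frames_%04d.png", output = "anim.webm", codec = "libvpx")
```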
format is likewise ignored if the file name in output ends with .gif.
The default is "yuv420p", which performs 4:2:0 chroma subsampling.
This pixel format can reduce video quality, but it is the default because it ensures compatibility with most media players, many of which still cannot play 4:4:4 video.
For valid alternatives, run system("ffmpeg -pix_fmts").
lossless is ignored except for relevant codec settings, e.g., h264 or libx264.
If TRUE, recommended preset values are ultrafast or veryslow.
See https://trac.ffmpeg.org/wiki/Encode/H.264 for more information.
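A lossless encoding sketch under those recommendations (file names are hypothetical):

```r
library(mapmate)

# lossless is honored only with a relevant codec such as libx264;
# the veryslow preset trades encoding time for a smaller lossless file.
ffmpeg(pattern = "frames_%04d.png", output = "archive.mp4",
       codec = "libx264", lossless = TRUE, preset = "veryslow")
```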
min.rate applies only to non-gif video output. Video files typically have framerates of 25 or 30 fps or higher.
When creating gifs from an image file sequence, low framerates on the order of 10 fps or lower, even 1 fps, are often desired.
If such a low framerate is used for video file output, many media players may not be able to play the file, or play it properly.
For example, the popular VLC media player can have difficulty with playback of video files created with a framerate of less than 10 fps, particularly rates closer to 1.
min.rate sets a lower bound on the framerate of the output file.
The intended frame rate of the input image sequence specified in pattern, given by rate or derived from delay, is still preserved in the output playback.
However, if rate is less than min.rate, the output file achieves min.rate fps by duplicating frames.
For example, if rate=1 and min.rate=10, a sequence consisting of 60 images will be converted to a 10 fps video containing 600 frames and taking the intended 60 seconds to play.
The tradeoff for compatibility with various media players is increased video file size, though depending on the codec the increase should not be linear, e.g., not likely a tenfold increase for the given example.
Nevertheless, the user has control over the output fps lower bound via min.rate; just know that too low a value can cause problems.
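The frame duplication in the example above can be checked with simple arithmetic:

```r
rate <- 1; min.rate <- 10; n_images <- 60
duration <- n_images / rate        # 60 seconds of intended playback
out_frames <- duration * min.rate  # 600 frames at the 10 fps floor
```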
If rate is greater than min.rate, the output file framerate will match the specified rate of the input image sequence.
This may not be desired if rate is an atypical number for video framerates.
The matching can be overridden by specifying fps.out as something other than rate.
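A hedged sketch of overriding the output framerate (hypothetical file names):

```r
library(mapmate)

# Input sequence intended at 12 fps, encoded to a standard 24 fps file;
# intended playback timing is preserved by frame duplication.
ffmpeg(pattern = "anim_%04d.png", output = "anim.mp4",
       rate = 12, fps.out = 24)
```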
# NOT RUN {
data(borders)
library(dplyr)
library(purrr)  # required for map()
n <- 90
borders <- map(1:n, ~mutate(borders, id = .x)) %>% bind_rows()
args <- list(width = 300, height = 300, res = 300, bg = "black")
save_seq(borders, id = "id", n.frames = n, col = "white", type = "maplines",
         file = "images", png.args = args)
ffmpeg(pattern = "images_%04d.png", output = "video.mp4", rate = 10)
# }