How to use Delayed Job to handle your Carrierwave processing
This tutorial builds on my previous post about adding FFMPEG processing to Carrierwave. Here I will show you my attempt at using Delayed::Job to do the heavy lifting of processing when uploading files with Carrierwave. Remember, this could probably use some improvement, but it is a good starting point. So let's begin. The first thing you will need to do is add Delayed::Job to your application:
# Gemfile
gem "delayed_job"
Next you need to create the migration and migrate the database:
rails generate delayed_job
rake db:migrate
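You will also want a worker running to pick the jobs up; delayed_job provides a rake task for that:

rake jobs:work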
Now we get to the good part. Let's create a module to include into Carrierwave that will hold off on doing the processing until Delayed::Job gets around to it:
# lib/carrier_wave/delayed_job.rb
module CarrierWave
  module Delayed
    module Job
      module ActiveRecordInterface
        def delay_carrierwave
          # Default to delaying, but respect an explicit false
          # (`@delay_carrierwave ||= true` would clobber false back to true)
          @delay_carrierwave = true if @delay_carrierwave.nil?
          @delay_carrierwave
        end

        def delay_carrierwave=(delay)
          @delay_carrierwave = delay
        end

        def perform
          # Called by the Delayed::Job worker: run each version's processors for real
          asset_name = self.class.uploader_options.keys.first
          self.send(asset_name).versions.each_pair do |key, value|
            value.process_without_delay!
          end
        end

        private

        def enqueue
          # Delayed::Job serializes this record and works it off later via #perform
          ::Delayed::Job.enqueue self
        end
      end

      def self.included(base)
        base.extend ClassMethods
      end

      module ClassMethods
        def self.extended(base)
          base.send(:include, InstanceMethods)
          base.alias_method_chain :process!, :delay
          ::ActiveRecord::Base.send(
            :include, CarrierWave::Delayed::Job::ActiveRecordInterface
          )
        end

        module InstanceMethods
          def process_with_delay!(new_file)
            process_without_delay!(new_file) unless model.delay_carrierwave
          end
        end
      end
    end
  end
end
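With delay_carrierwave defaulting to true, you can still force synchronous processing (say, in a console or a test) by flipping the flag off before saving. A minimal sketch, assuming the Asset model defined later in this post and a hypothetical song.mp3:

asset = Asset.new(:asset => File.open("song.mp3")) # hypothetical file
asset.delay_carrierwave = false # run the processors inline instead of waiting for a worker
asset.save

Note that the after_save :enqueue hook shown later will still queue a job in this case; you may want to guard it with delay_carrierwave too.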
Awesome! Now we need to tie this into our Uploader:
# app/uploaders/asset_uploader.rb
require File.join(Rails.root, "lib", "carrier_wave", "ffmpeg")
require File.join(Rails.root, "lib", "carrier_wave", "delayed_job") # New

class AssetUploader < CarrierWave::Uploader::Base
  include CarrierWave::Delayed::Job # New
  include CarrierWave::FFMPEG

  # Choose what kind of storage to use for this uploader:
  storage :file

  # Override the directory where uploaded files will be stored.
  # This is a sensible default for uploaders that are meant to be mounted:
  def store_dir
    "#{Rails.root}/uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end

  # Add a version, utilizing our processor
  version :bitrate_128k do
    process :resample => "128k"
  end
end
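If you don't have the FFMPEG module from the previous post handy, here is a rough sketch of the shape a resample processor could take. This is an assumption for illustration only, not the exact module from that post, and it assumes the ffmpeg binary is on your PATH:

# lib/carrier_wave/ffmpeg.rb -- illustrative sketch, not the module from the previous post
module CarrierWave
  module FFMPEG
    # Invoked by `process :resample => "128k"` on each version
    def resample(bitrate)
      cache_stored_file! unless cached?
      tmp_path = "#{current_path}.tmp.mp3"
      # -ab sets the audio bitrate; -y overwrites the temp file if it exists
      system("ffmpeg -y -i #{current_path} -ab #{bitrate} #{tmp_path}")
      File.rename(tmp_path, current_path)
    end
  end
end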
The last thing we have to do is update our model to enqueue the delayed job:
# app/models/asset.rb
class Asset < ActiveRecord::Base
  mount_uploader :asset, AssetUploader

  after_save :enqueue # New
end
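A quick sanity check from the rails console (the fixture path here is hypothetical):

asset = Asset.create!(:asset => File.open("spec/fixtures/song.mp3"))
Delayed::Job.count # should now be 1 -- processing was deferred, not run inline
# then work the job off:
#   rake jobs:work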
There you have it. Now when you create a new Asset, associate a file, and save it, it shouldn't run the processors immediately, but instead create a Delayed::Job record. Then Delayed::Job should pick it up and run the processors on it. This may not be perfect, but at least it's a start! Thanks for reading!
Great article. I assume it's not a huge step to add something like S3 as the temporary file store, and then you can run lots of different workers. Although I'd be tempted to use a service that can do the encoding for me, this is really interesting.
A nice addition would be the ability to specify a certain size that is rendered immediately, with the rest processed in the background. This is very image specific, but when using a dynamic uploader, e.g. uploadify, it's nice to provide some form of immediate feedback to the user. It's a good balance of keeping the request fast and providing a good user experience. I've hacked together stuff to do this with paperclip before, but I've never been happy with the implementation.
brian: that would be a good idea, and you are right, very image specific. I haven't actually tested it out, but the original image uploaded should be available right away, so perhaps doing some client-side resizing temporarily would suffice?
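A rough, untested sketch of what brian describes, building on the InstanceMethods module above; the PROCESS_NOW_VERSIONS whitelist is hypothetical:

module InstanceMethods
  # Versions to process immediately; everything else waits for the worker
  PROCESS_NOW_VERSIONS = [:thumb] # hypothetical whitelist

  def process_with_delay!(new_file)
    immediate = !model.delay_carrierwave ||
                PROCESS_NOW_VERSIONS.include?(version_name)
    process_without_delay!(new_file) if immediate
  end
end

The worker's perform would still reprocess the whitelisted version, so you would probably want to filter it out there as well.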
Randy
Interesting post. Like the comprehensive code inclusion.
I'm at Appoxy and we have a cloud worker service called SimpleWorker (simpleworker.com). It's in public beta, so I'd love for you to try it out with this type of job and let me know what you think.
(You create worker classes in /app/workers and then queue or schedule them with a .queue or .schedule command.)
We’d love to be able to include a worker example for Carrierwave and FFMPEG. You get 5 hours of compute time free but we can let you have more if you need it. Happy to help if you’d like.
Kind regards,
Ken
Hi, I’m trying out this solution but I’m seeing following behaviour:
The job is enqueued, and all the versions are stored immediately as exact copies of the original file.
When the job is run, those copies are replaced with the actual processed versions of the image.
This is bad behaviour to me: when including tiny thumbnail versions, you get the original version at full size. Also, I intended to rely on this to avoid in-request delays from the Heroku instances to the external storage (S3 or whatever).
Any hints or feedback appreciated. I will report on my progress.
Cheers!
Alberto, I am not sure I follow exactly. It's possible this process may only really work with transcoding videos and not image modification. There may be further improvements to it that need to be made. This is just intended as a starting point.
You have a small typo in the include statement in the uploader class. It should be:
include CarrierWave::Delayed::Job
Is the example a working piece of code?
I see that the instance method "process_with_delay" calls "process_without_delay", which is not defined anywhere.
Bob – Yes, thank you. Updated.
Ginie – If I remember correctly, process_without_delay is included in Delayed::Job. You can see it defined here https://github.com/collecti… in the handle_asynchronously class method.
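In the module from this post, the bang versions come from the alias_method_chain :process!, :delay call, which is roughly equivalent to:

alias_method :process_without_delay!, :process!
alias_method :process!, :process_with_delay!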