A blog about coding, run by a bunny

Me versus JavaScript


If you’ve been following me, it probably does not come as a surprise anymore that I dislike JavaScript. I hate writing code for our application that needs to rely on JavaScript to work properly, and I hate dealing with this mess called a front-end, where JavaScript is needed to get forms sorted out properly or validations and requirements checked.

For me this is a code smell, and it makes the application vulnerable. If the application cannot work without JavaScript, something is fundamentally wrong with its design. But that’s a discussion I will save for later.

The Problem

I’ve been given a ticket where I need to move the fields for all credentials to external endpoints away from our Customer page and place them on the Subscription page. On its own this is a logical requirement, were it not for the structure of the entities involved. I want to refactor the code so that the credentials are also properly stored on the Subscription entities and no longer on the Customer entity, as they don’t belong there. Alas, that was not part of the ticket’s scope.

The issue I want to bring forward is the JavaScript I had to write in order to make the following happen:

  • You first need to select a Customer from a dropdown element.
  • Based on the selected customer, the available DataSource entities are selected and added to the second dropdown.
  • Based on the selected datasource entity, a list of Indicator elements is added to the form.
  • Based on the selected datasource entity, specific credential fields are shown.
  • Based on the selected datasource entity, credit fields are enabled/disabled.

Because all this data is not available from the get-go, additional calls to the back-end are needed to determine what can be done and what needs to be selected/configured based on the user’s selections. So I came up with the following code…

The JavaScript code

# When a ::Customer is selected,
# we need to retrieve all available ::DataSource entities that are
# available for creating a new ::Subscription.
# This information is available in the ::Admins::SubscriptionController
# and requires the ID of the customer to load the relevant information.
$('#subscription_customer_id').on('change', (event) ->
  # Read out the selected customer_id from the select2 element.
  customer_id = $('#subscription_customer_id').val()

  # Make the AJAX request to the backend and
  # load the available DataSources
  # using the customer ID of the selected Customer.
  # This will return an array of hashes containing
  # the ID, the Name and the premium status.
  $.get "secret_domain/#{customer_id}/available_data_sources", (data) ->
    # Data received, clear the stored values first!
    window.data_sources = []

    # Clear the entire list inside the select2 element for the DataSource entities.
    # Because we want to allow the Admin to properly select his DataSource, we inject an
    # empty element immediately as well.
    $('#subscription_data_source_id')
      .empty()
      .append($('<option>', { value: null, text: null }))

    # Loop over the datasources and store them one by one in the array
    # and add them to the select2 element as well.
    # This allows us to reselect values for a new customer
    # when the selection changes.
    for data_source in data
        premium: data_source.premium

      $('#subscription_data_source_id').append(
        $('<option>', { value:, text: })
      )
)

# When a ::DataSource is selected,
# we need to retrieve the matching ::Indicators that can be selected
# for this ::DataSource.
# This is done by asking the Controller what Indicators
# can be selected based upon the customer_id and data_source_id.
# The information is returned as AJAX,
# which we will use to inject them inside the form so
# the user can properly select them from the page.
$('#subscription_data_source_id').on('change', (event) ->
  # Extract the data needed from both select2 elements.
  data_source_id = $('#subscription_data_source_id').val()
  data_source_name = null
  customer_id = $('#subscription_customer_id').val()

  # Fetch the name of the selected DataSource
  for option in $('#subscription_data_source_id option')
    if option.value == data_source_id
      # Strip all © symbols and replace spaces by _
      data_source_name = option.text
        .replace(/\s©/g, '')
        .replace(/\s+/g, '_')
        .toLowerCase()

  # Because we don't know whether we have re-selected a DataSource,
  # we are going to enable the 2 final input fields of the Form
  # to allow changes to be made.
  # These fields are optional, and don't need to be submitted,
  # which is why we use the disabled property. So let's remove it.
  $('input[name="subscription[expires_at]"]').removeAttr('disabled')
  $('input[name="subscription[credit_count_limit]"]').removeAttr('disabled')

  # Now loop over the stored ::DataSource entities.
  # If we find the one that we have selected, mark the fields as disabled based on whether this
  # ::DataSource is a premium one or not.
  for data_source in window.data_sources
    if parseInt( == parseInt(data_source_id)
      if !data_source.premium
        $('input[name="subscription[expires_at]"]').attr('disabled', 'disabled')
        $('input[name="subscription[credit_count_limit]"]').attr('disabled', 'disabled')

  # Now we can ask the application to fetch us the ::Indicators.
  # The following information is returned:
  #  - Riskgroup ID, Riskgroup Name
  #  - Indicator ID, Name, Description.
  # We will use the AJAX response to properly populate the table and make them all selected by default.
  $.ajax
    url: "/secret_admin_panel/#{data_source_id}/sid?c_id=#{customer_id}",
    success: (data) ->
      # Copy all the data received from the JSON response into the table.
      # We first remove all existing elements, and then rebuild the table from scratch.
      table = $('#data_source_indicators > table.table.table-striped')

      for risk_group in data.risk_groups
        # Add the table-header
        table.append("<tr><th colspan=\"3\">#{}</th></tr>")

        # Add the indicators below that.
        for indicator in risk_group.indicators
          row = $('<tr></tr>')
          row.append("<td><input name=\"subscription[indicator_ids][]\" checked=\"checked\" type=\"checkbox\" value=\"#{}\" /></td>")
          row.append("<td>#{}</td>")
          row.append("<td>#{indicator.description}</td>")
          table.append(row)

  # The DataSource might also require credentials to be set.
  # Because we are unsure whether we need some Credentials for the DataSource or not
  # we need to look up the ID of the DataSource and fetch its matching
  # CSS element.
  # But we hide all elements before that.
  $("#data_source_bisnode").css('display', 'none')
  $("#data_source_bureau_van_dijk").css('display', 'none')
  $("#data_source_creditsafe").css('display', 'none')
  $("#data_source_format").css('display', 'none')

  if data_source_name != null
    $("#data_source_#{data_source_name}").css('display', 'block')
)

For me this is hacky:

  • I need to hard-code URLs, meaning I cannot reuse the logic anywhere.
  • It’s bound too tightly to the design of the pages.
  • It combines way too much data, to the point where separation of concerns becomes hard.


The solution for me becomes pretty clear at this point:

  1. Separate the credentials to their own encrypted entities
  2. Unify the interface for these Subscription objects to be generic
  3. Restructure the data so that each form knows what to display
  4. Get rid of the JavaScript.

Setting a TTL for Redis keys


Based on the title, you’d think this was supposed to be something simple, right? Well, I can tell you that it completely depends on the gems you are using to get all your entries properly configured with a reasonable TTL, and not fall back on the defaults of one year and infinite.


One of the first gems we use in our application is the ::Redis::Semaphore gem. We use this gem to create, as the name implies, semaphore objects in Redis to prevent the simultaneous execution of background jobs in our Sidekiq framework.

The documentation of this gem makes mention of the :expiration key that can be set. They also warn that this might be dangerous, because a key could expire during execution of the process. While this might be true, this luckily was a non-issue in our situation.

Basically, when a key receives a TTL, the timer starts running as soon as the key is created in Redis. But the Redis documentation states that commands which overwrite the contents of a key, such as SET and GETSET, reset its TTL, and tests have shown this. So with the way we use the semaphore keys, our key should not expire during execution of the process. The snippet below shows how to use the setting when creating a semaphore:"our key", redis: Connection.redis_client, expiration: 7.days.to_i)

Rails Cache

The second place where we use Redis for caching purposes is the overall Rails cache. In order to make this happen, we relied on the gem redis-rails. This gem makes it possible to use Redis as caching back-end for the Rails framework. The configuration is done in your application.rb and is pretty straightforward:

# Redis Cache Configuration.
# With the new TTL settings, keys are now automatically expired after 7 days.
config.cache_store = :redis_store, Chamber[:redis][:cache], { expires_in: 7.days.to_i }
config.session_store :redis_store, redis_server: Chamber[:redis][:cache], key: Chamber[:redis][:session_key], expires_in: 1.year

Exceptional cases

Of course, it wouldn’t be a real application if there weren’t any exceptions to using the standard configuration. There’s an edge case in our application where we do not wish to expire the data stored inside Redis, but how do we do that when the global cache has now been set to 7 days?

Easy, you simply remove the TTL in the caching call. When the TTL is explicitly set to nil, the key is stored without an expiry; Redis reports a TTL of -1 for such keys, meaning they never expire:

Rails.cache.fetch(cache_key.to_param, expires_in: nil) { yield }


The Geocoder gem

This is where it gets more interesting. The geocoder gem allows you to make calls to various endpoints to have an address transformed into geo-coordinates. The way we used this gem was to store the returned information in Redis and use it as a faster lookup when the address matches, reducing the load on the endpoints we used.

The gem itself does not support setting a TTL on the keys it uses. Having it configured to store everything in Redis, we wrote a small wrapper that mimics the behaviour of the Geocoder cache and sets the desired TTL for us:

module Geocoder
  class AutoExpireRedisCache
    # Initializes the cache using the actual store and TTL as arguments.
    # By default, keys are expired after a week.
    def initialize(store, ttl = 7.days.to_i)
      @store = store
      @ttl = ttl
    end

    # Looks up the value using the provided URL as key.
    def [](url)
    end

    # Store the provided value, using the URL as key.
    # The stored key is expired after the defined TTL.
    def []=(url, value)
      @store.[]=(url, value)
      @store.expire(url, @ttl)
    end

    # Returns all keys currently used by the store.
    def keys
    end

    # Deletes the specified key from the store.
    def del(url)
    end
  end
end

Rack::Cache

This was the hardest gem to have follow our TTL policy for Redis keys. The gem is basically a hook for Rack to store its cache data inside Redis. Unfortunately we do not use it as it should be used, in the sense that we never call the cache_for method in our controllers. So throughout the entire application we rely on the default settings of this gem.

This gem has a constant called ::Redis::Rack::Cache::DEFAULT_TTL which is set to 1 year. A whole year is pretty long to keep cached pages and metadata for the application. Looking through the documentation of Redis, Rack and this gem, we came across an initial solution:

config.action_dispatch.rack_cache = {
  metastore: "#{Chamber[:redis][:cache]}/metastore",
  entitystore: "#{Chamber[:redis][:cache]}/entitystore",
  default_ttl: 7.days.to_i,
  use_native_ttl: true
}
Unfortunately this doesn’t work at all. This ends up with the weird behaviour that everything gets stored for over a year because of the constant defined by the framework. The gem offers no configuration options to overwrite this behaviour.

So what does every good Ruby programmer do?

# We redefine the constant of the Redis Rack Cache to be 7 days instead of one year.
# Since this gem doesn't like to be configured, Oli forces it to be configured using ruby sugar.
::Redis::Rack::Cache.send(:remove_const, :DEFAULT_TTL)
::Redis::Rack::Cache.send(:const_set, :DEFAULT_TTL, 7.days.to_i)

We injected this snippet before the configuration part in our application.rb, overwriting the constant with a setting that’s more to our liking. While it’s not the most elegant solution it does get the job done:

➜  ~ redis-cli> select 1
OK[1]> INFO keyspace
# Keyspace
db1:keys=51,expires=51,avg_ttl=28443512775[1]> TTL metastore:4453e4e41864af53bcce5e2c3dffe719a92374ae
(integer) 68400  # seconds remaining before the key expires

Of course this requires you to initially flush all keys when these settings are applied, but we achieved our desired goal: keep the memory footprint of Redis in line.

Testing CSRF and custom headers


Based on my previous article, we did run into some troubles due to the way Warden, Devise and Rails handle custom HTTP headers that we did not anticipate. After digging deep into the bowels of the frameworks, we finally came to a solution that seems to be working for us and is also testable, or so we think at least.

What changes did we make?

The first, and most important change we had to make was the usage of the headers. We originally relied on the HTTP_AUTHORIZATION header being supplied from clients that wanted to use our application as an API.

Devise and Warden rely on this header for supporting BasicAuth, as does Rack. Add into the mix that Rack prefixes all custom headers with HTTP_, and we ran into some weird situations with our clients.

So, to keep using the headers, and to make sure we had no conflict with the existing frameworks, I decided to prefix them with our company name as well to create a unique namespace. (Who would have known that there are best practices, right?)

So from now on we use custom HTTP_RISKMETHODS_ headers for transmitting our information from the mobile companion applications. Of course we wanted to make sure that this was actually working, so we were looking to write some tests with RSpec to capture the behaviour.

RSpec, custom headers and Rack

Now if you thought that figuring out this entire CSRF stack was difficult, wait till you actually want to test it using RSpec…

The initial problem is that none of the specs actually support CSRF. By default, this behaviour is disabled in the test environment, because you would have to make GET requests first to get the token, and then inject it in every POST request you would make in all your controller-related tests. Suffice it to say, that’s quite the hassle when you simply want to test controller logic.
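To make that dance concrete, here is a stripped-down sketch of the token round-trip in plain Ruby (names simplified; this is not how Rails implements it internally):

```ruby
require 'securerandom'

# A GET renders the form and stores a fresh token in the session.
def render_form(session)
  session[:csrf_token] = SecureRandom.hex(16)
end

# A POST only passes verification when the submitted token matches
# the one stored in the session.
def verified?(session, submitted_token)
  !submitted_token.nil? && session[:csrf_token] == submitted_token
end

session = {}
token = render_form(session)
verified?(session, token)    # => true
verified?(session, 'forged') # => false
```

Every controller spec doing a POST would have to replay this GET-then-POST sequence, which is exactly the hassle the test environment avoids by disabling the check.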

However, the tests we wanted to write HAD to simulate this behaviour. We explicitly depend on the CSRF check failing, because everything is being submitted/requested from a third party and not the Rails application.

So the first step in our test suite:

before(:each) do
  ActionController::Base.allow_forgery_protection = true
end

after(:each) do
  ActionController::Base.allow_forgery_protection = false
end

The above snippet enables and disables the CSRF checks for every test. That way we can be sure the setting does not affect any of the other tests we have. If you want to see the consequence of this, simply enable it in one of your specs and observe how they suddenly all start failing.

Feature test with Capybara

The first test I wrote was a complete feature spec in RSpec to simulate the high-level usage of the mobile companion applications. The problem however is that these tests are driven by Capybara, and sending out custom headers with Capybara is actually quite a pain.

What I did was create different drivers for the various test-cases and simply switch to them during each test to make sure that the request would properly simulate the behaviour I wanted to test:

::Capybara.register_driver(:apple_tv_driver) do |app|
  timestamp =, {
    headers: {
      AUTH_HEADER => generate_secret_hash,
      TIMESTAMP_HEADER => timestamp,
      'User-Agent' => 'Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)',
      'Content-Type' => 'application/json'
    }
  })
end

::Capybara.register_driver(:apple_tv_driver_missing_auth) do |app|
  timestamp =, {
    headers: {
      TIMESTAMP_HEADER => timestamp,
      'User-Agent' => 'Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)',
      'Content-Type' => 'application/json'
    }
  })
end

::Capybara.register_driver(:apple_tv_driver_missing_timestamp) do |app|
  timestamp =, {
    headers: {
      AUTH_HEADER => Digest::MD5.hexdigest("rarodefault@riskmethods.nettonga#{timestamp}"),
      'User-Agent' => 'Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)',
      'Content-Type' => 'application/json'
    }
  })
end

With these drivers in place, our tests started to look like this:

context 'Missing Headers' do
  context 'AUTH header is missing' do
    it 'prevents requests being made to the application' do
      ::Capybara.current_driver = :apple_tv_driver_missing_auth

      # Authenticate first, to get the cookie set properly
      ::Capybara.current_session.driver.submit(:post, '/users/login.json', parameters)

      # This should have failed
      expect(page.status_code).to eq(403)
    end
  end

  context 'TIMESTAMP header is missing' do
    it 'prevents requests being made to the application' do
      ::Capybara.current_driver = :apple_tv_driver_missing_timestamp

      # Authenticate first, to get the cookie set properly
      ::Capybara.current_session.driver.submit(:post, '/users/login.json', parameters)

      # This should have failed
      expect(page.status_code).to eq(403)
    end
  end
end
Now we have a feature spec that simulates a third-party client by submitting custom headers using the Capybara driver.

Request specs with Rack::Test

The second test we wanted was an RSpec request spec. These are a different kind of high-level test available within the RSpec suite. However, I ran into so many problems with this kind of test that I had to manually include Rack::Test::Methods to get the functionality I was looking for:

def login
  post('/users/login.json', login_params.to_json)
  expect(last_response.status).to eq(201)
  expect(rack_mock_session.cookie_jar['_riskmethods_session']).to be_present
end

# Performs a Logout request against the application using the Riskmethods TV Headers
def logout
  delete('/users/logout.json') # hypothetical path, mirroring the login call
  expect(last_response.status).to eq(204)
end

# Sets the headers needed for a request using the Rack::Test implementation.
# The feature attribute can be used to set custom scenarios for headers.
def inject_request_headers(feature: :default)
  headers.each { |header, value| header(header, value) } if feature.eql?(:default)
  headers.merge('VERSION' => '2').each { |header, value| header(header, value) } if feature.eql?(:wrong_version)
end
These three methods use the methods defined by Rack::Test to perform a login request, a logout request and to inject the custom headers any third-party client needs to submit. As you can see, they are completely different from standard RSpec tests, due to the methods being used and the hacks needed to get the setup I want.

Important: Testing like this is strange. Rack::Test automatically prefixes any custom header with HTTP_, so make sure you define the right headers!
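The prefixing follows the CGI convention for request headers. A simplified approximation of what happens to a header name on its way into the Rack env (the real logic lives inside Rack::Test; this is just an illustration):

```ruby
# Approximates how a custom header name ends up as a Rack env key:
# upcased, dashes become underscores, and an HTTP_ prefix is added
# when it is not already present.
def rack_env_key(header_name)
  key = header_name.upcase.tr('-', '_')
  key.start_with?('HTTP_') ? key : "HTTP_#{key}"
end

rack_env_key('X-Riskmethods-Auth')       # => "HTTP_X_RISKMETHODS_AUTH"
rack_env_key('HTTP_RISKMETHODS_VERSION') # => "HTTP_RISKMETHODS_VERSION"
```

The warning above boils down to this: the header you set in Rack::Test and the env key the application reads must agree after this mangling.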

Our specs themselves now looked like this:

it 'returns a valid response when the Apple TV headers are present' do
  inject_request_headers
  get('/users/current.json') # hypothetical endpoint for illustration

  expect(last_response.status).to eq(200)
end

it 'returns a forbidden when the Apple TV headers are incorrect' do
  inject_request_headers(feature: :wrong_version)
  get('/users/current.json') # hypothetical endpoint for illustration

  expect(last_response.status).to eq(403)
end


I’ve had to make so many changes to the way the application works that it becomes hard to keep track of what proper CSRF protection actually means.

In a short summary:

  • protect_from_forgery enables CSRF protection. Rails recommends disabling this for API controllers, but we unfortunately do not yet have dedicated API controllers, so we keep it enabled.
  • Every POST, PUT or DELETE request that does not include the authenticity token generated during a GET request will trigger handle_unverified_request.
  • Devise overwrites handle_unverified_request as well!
  • Devise has an around_action that allows authentication to succeed when posting parameters to the controller.
  • Devise::SessionsController does inherit from your ApplicationController by default. Be careful what you define there.
  • Do not use standard HTTP headers for custom behaviour. Warden doesn’t play nice if you do.

I hope this piece of information sets people on the right path when they have to deal with CSRF, API requests and RSpec.

CSRF, Devise and Rails


I’m sure everyone has heard about CSRF before, or at least uses the protect_from_forgery method inside their Ruby on Rails application. But what if you want to make an API?

Rails says it’s best to disable this feature when building APIs, as they are supposed to be stateless. This is all nice and dandy in theory, but many practical examples require some state, or a validation of authentication, or the data representation needs to be bound or scoped to the user making the request. In our case, it was the latter.

So we decided to enable protect_from_forgery on our API controllers as well. Now what does this actually cause?

First let’s look at the documentation. The documentation gives a good explanation on what the consequences are of enabling this attribute on your controllers:

  • Every request contains a token that needs to be supplied back on a POST.
  • This token changes with every request.
  • This token is stored in the session.
  • APIs or JavaScript calls usually have no clue about this token.

So when you enable this feature, and you have a mobile application like we do, all your calls to the website suddenly fail because the CSRF token is missing. So how do we bypass this problem?

Rails uses the handle_unverified_request method to give you a final check or attempt to bypass this problem. Except there is one problem when using Devise: this method gets overwritten by them as well…

To make things worse, not all of our controllers are DeviseControllers. If you run a test with the Devise::SessionsController, you can see that there are actually no problems in using that controller as part of your API. The reason is this:

class Devise::SessionsController < DeviseController
  prepend_before_action :require_no_authentication, only: [:new, :create]
  prepend_before_action :allow_params_authentication!, only: :create
end

This is the source of the Devise::SessionsController. As you can see, they actually remove their authentication requirements and allow parameter authentication on the controller. I did not dive into the code in detail, but my assumption is that they also disable the CSRF check with this.

So what did we initially implement?

We started with overwriting the handle_unverified_request and had some functions that determined whether a request was from a mobile client or not. If we detected this, then we returned true and did not call the super method of Rails/Devise to block the request.

But there was a problem that we actually didn’t see with this. When I wrote more specs, I discovered that whether we checked the headers for authentication was irrelevant. Once you were authenticated with the system and had your session cookie, you could dive into any resource, and regardless of the headers being submitted, Devise would allow the request and return the requested resource.

Consequence: anyone hijacking a valid session would be able to access the resources, regardless of whether they had the correct headers in place. Just having the headers present would be sufficient to bypass Devise.

How do you fix this?

The fix itself is pretty straightforward. The problem was that we assumed every request was being handled by handle_unverified_request. And this is simply not the case.

When reading the documentation, it clearly states the following:

All requests are checked except GET requests as these should be idempotent. Keep in mind that all session-oriented requests should be CSRF protected, including JavaScript and HTML requests.

So only POST, PUT and DELETE requests are covered by this method, which explains why every single GET request was going through. Our functions, first of all, simply returned true/false and did not stop the request chain. And secondly, for GET requests they were never called!

So how does one make his controllers API compatible while keeping Devise and the CSRF checks in place?

# frozen_string_literal: true
class ApplicationController < ActionController::Base
  # Filters
  before_action :validate_mobile_access, if: :mobile_access? # Verify mobile requests

  # Is a request considered a mobile request?
  def mobile_access?
    request.headers['MY_SECRET_HEADER'].present?
  end

  # Is the authentication data present and valid?
  # Abort the request if it isn't.
  def validate_mobile_access
    access_hash = request.headers['MY_SECRET_HEADER']

    head(:forbidden) and return unless ::Authenticator.auth(access_hash)
  end

  # Overwrite the behaviour for mobile requests.
  def handle_unverified_request
    return if mobile_access?

    super
  end
end

The above is a cleaned-up snippet of how we tackled the problem. Of course this is just a simple pseudo snippet so as not to reveal our authentication mechanics, but it illustrates briefly what is required to cover all types of requests when you use Devise and Ruby on Rails:

  • Is your request a GET?
    • Then validate_mobile_access is triggered when the headers are present
      • If the headers are correct, the request goes through and follows the Devise auth logic
      • If the headers are incorrect, we return a FORBIDDEN response
    • If the chain is not halted here, Devise and/or CanCan or other mechanics kick in.
  • Is your request a POST variant?
    • Then first handle_unverified_request kicks in, because the CSRF token is missing
    • If we detect it to be a valid mobile request, we stop the method and don’t call super.
      • Pass the request down the chain, and now it follows the same path as GET
    • If we detect it’s not a mobile request, we call super and let Rails handle it.

But how do I handle authentication requests then? Easy: if you look at the beginning of the post, you will see that the Devise::SessionsController disables these checks for its login actions, so it will work out of the box.

Hope this sheds some light on the whole complex topic and puts people on the right path when they have to deal with this kind of mess.

Upgrading the sprockets-rails gem

Sprockets, and consequently the sprockets-rails gem, received new versions. For us here at work, this means we need to upgrade the gems and make sure the application still works with the changes.

For most gems, the upgrade procedure would be rather simple:

  1. Update your Gemfile
  2. Run the bundler update command
  3. Run the specs and deploy when it all passes.

For us, the upgrade was not so simple…
After upgrading the gems, I ran our RSpec suite and was greeted by the following errors all of a sudden:

  8) Admins::WhiteListedIpsController#new authorized properly renders the view
     Failure/Error: <%= javascript_include_tag :application, :admin, :map %>
       The asset "application.js" is not present in the asset pipeline.

Running the application locally in development mode produced no errors whatsoever. Every page, function, call and plugin was properly working under the new Sprockets version, yet when running the test suite with RSpec it suddenly broke.

My first assumption was that there was a conflict between RSpec and the new Sprockets, but that was soon dismissed, as there was nothing obvious pointing in that direction, and running the suite multiple times produced errors for different missing assets, depending on which spec ran first.

So what was the problem?

In an older version of Sprockets, it was a bad thing to have the compile environment enabled during tests. So in our config/environments/test.rb we had the following line of code written:

# Sprockets whines if this is enabled in tests
config.assets.compile = false

Letting my brain go back in time, I tried to remember where this came from, and when I checked the git logs, we had added this for a very old version of Sprockets. With the recent upgrade, this is no longer required at all, and because we have run into problems in the past with missing assets, we actually changed our approach:

  • We want Sprockets to tell us when something is not in the pipeline, regardless of the env
  • We want tests to detect problems for production, so the env should be the same
  • We use an initializer to load all assets properly

The solution: We removed that one line from the environment initializer, and everything worked like a charm again.
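The initializer mentioned above follows the standard sprockets-rails pattern; a minimal sketch (the asset names are examples, not our real manifest):

```ruby
# config/initializers/assets.rb (sketch; asset names are examples)
Rails.application.config.assets.precompile += %w(admin.js map.js admin.css)
```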

How did I figure this out?

I actually spent an hour or so looking through all the various settings and playing around with the initializers as well as some configuration changes. Nothing really produced anything that seemed like it would solve our problem. Often I received the error that an asset was not in the pipeline, despite it being explicitly written inside the initializer.

A lightbulb went on when one of the errors complained about application.js not being in the pipeline. That should not really be possible, as this is loaded by default in the Rails framework.

So after reading the documentation of sprockets-rails once more:

config.assets.compile

Enables Sprockets compile environment. If disabled, Rails.application.assets will be nil to prevent inadvertent compilation calls. View helpers will depend on assets being precompiled to public/assets in order to link to them. Initializers expecting Rails.application.assets during boot should be accessing the environment in a config.assets.configure block. See below.

config.assets.configure

Invokes block with environment when the environment is initialized. Allows direct access to the environment instance and lets you lazily load libraries only needed for asset compiling.

config.assets.configure do |env|
  env.js_compressor  = :uglifier # or :closure, :yui
  env.css_compressor = :sass   # or :yui

  require 'my_processor'
  env.register_preprocessor 'application/javascript', MyProcessor

  env.logger = Rails.logger
end
It all became clear: our asset compilation was disabled for some reason, and looking at the environment file of our test environment revealed the culprit.

But what about the images?

Well, they seem to be broken right now. Shortly after writing this post, I found a problem with images inside RSpec and the test environment not being loaded. I’ve opened a bug report with sprockets-rails.

Basically, what I did to make it work again is add the following snippet for now in our assets initializer:

# HACK: Add all images from our asset folders
# For some reason, Sprockets is not loading the images by default into the path.
# This hack adds all .png and .gif images to the initializer so they are part of the pipeline.
# Bug is opened to address this issue:
%w(app vendor lib).each do |folder|
  folder_path = Rails.root.join(folder, 'assets', 'images')

  %w(*.png *.gif).each do |extension|
    full_path = File.join(folder_path, '**', extension)

    Dir.glob(full_path).each do |entry|
      if entry.end_with?(extension.gsub('*', '')) # Kick out . entries
        Rails.application.config.assets.precompile << entry.gsub(folder_path.to_s + '/', '')
      end
    end
  end
end
As soon as I receive feedback from the gem maintainers, I will alter our code to get rid of the problem.

First blog post

Blog under construction.

As I just returned from my business trip to Euruko 2016, I threw this online rather quickly. Over the coming days I will tune this blog and write my first article based upon a problem I’m currently dealing with at work.
