Codingbunny

A blog about coding, run by a bunny

Category

Ruby

Anything related to the Ruby programming language will end up in this category.

Load-balancer requests in the application

Load-balancers.

Almost every platform uses them these days to distribute the incoming requests between applications, to make sure one node or cluster does not get overloaded and that all users experience an acceptable performance.

The downside of these load-balancers, however, is that they need to send some kind of keep-alive message to the back-end, in order to verify whether the various endpoints they are balancing are still reachable. In our case, this was done by requesting a specific URL and checking whether we get an HTTP 200 response.

Now, without disclosing too much of the internals: we track every web request, rely on sessions and don’t want to pollute our database with information that’s not relevant to the system. So we originally had the idea to filter out all load-balancer requests in the controllers. Sounds perfectly okay so far, right? Except that code-wise this was a nightmare. We had a bunch of filters, an API to support and sessions to clean up. This resulted in the ApplicationController becoming a mess of hooks, cleanup code and basically undoing everything Rails did when a new request came in.

When another change request came in to filter out session data from Redis for the load-balancer, I started asking myself the question: do we even need to hit the Rails stack in order to know our application is up and running? The answer is no. You don’t need to hit Rails to know whether your application is up and running.

Rails is built on Rack, which supports custom middleware. By creating a middleware that responds directly to the load-balancer requests, we ended up with a simple solution that returns an empty HTTP 200 response to the load-balancer, but passes all other requests further up the stack.

  • The Rails stack is never invoked. This means no cleaning of sessions or data, and no implementing dozens of hooks and if-statements to filter out load-balancer requests.
  • The middleware runs before everything else, so we don’t even need to load the rest of the code or trigger anything.
  • The load is removed from the database and Redis, as those calls are never made.
  • A lot of the code is reduced to a single module:
module Middleware
  class LoadBalancer
    def initialize(app)
      @app = app
    end

    def call(env)
      params = { url: env['REQUEST_URI'], ip: env['REMOTE_ADDR'] }
      # skip_web_request? expects keyword arguments, so splat the hash.
      return blank_response if LoadBalancer.skip_web_request?(**params)
      @app.call(env)
    end

    def blank_response
      # An empty body with an explicit zero Content-Length.
      [200, { 'Content-Type' => 'text/html', 'Content-Length' => '0' }, ['']]
    end

    class << self
      def skip_web_request?(url:, ip:)
        skip_web_ips.include?(ip) && skip_web_urls.include?(url)
      end

      def skip_web_ips
        Settings.proxy.trusted_ips
      end

      def skip_web_urls
        ['http://our_fancy_balance_url']
      end
    end
  end
end

So basically, what this class does is return an empty HTTP 200 response if the request matches our load-balancer check, or delegate the request further up the stack in all other cases.

To get the middleware to run before everything else in your stack, you need to create an initializer for Rails and add the following code:

Rails.application.config.middleware.insert(0, ::Middleware::LoadBalancer)
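
To sanity-check the behaviour without booting the whole stack, you can invoke the middleware directly with a hand-built Rack env. A minimal sketch, assuming the Settings values used by the module above:

# Hypothetical smoke test: the inner app should only be reached when the
# request does not match the load-balancer check.
inner_app  = ->(env) { [200, { 'Content-Type' => 'text/html' }, ['real app']] }
middleware = Middleware::LoadBalancer.new(inner_app)

env = {
  'REQUEST_URI' => 'http://our_fancy_balance_url',
  'REMOTE_ADDR' => Settings.proxy.trusted_ips.first
}

status, _headers, body = middleware.call(env)
status # => 200
body   # => [''] (the blank response; inner_app was never called)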



Extracting functionality to a Service

Aether,

The name I’ve given to the new service I am developing at work. The goal is a complete data and logic isolation of one of our features, extracting it from the monolithic web application we currently run for our customers.

And while the idea in itself is amazing, the actual implementation is not so easy. The goal is to have a service that crawls the web for data, transforms it and stores it inside our system using the correct configuration and structure. The problems however are:

  • The core is so tightly coupled, that I’m not even sure I can start on this without some preliminary refactoring.
  • There is a data-dependency in both systems, where I do not see an easy separation.
  • There’s a constant need for synchronising data between the systems, which makes me wonder if there’s really a need to extract this.
  • Once we start on this, we need to drive it through completely.

Our Product Owner sees the issues and agrees that we need to clean up our technical debt regarding this feature. The business, however, has requirements that need to be fulfilled as well. Doing both at the same time might be possible, I suppose, but do we want that?

  1. We could say that all feature development for this topic is on hold, and the new service needs to be finished first. (We all know how “finished” software is)
  2. We port over the established components and keep developing features in the app, porting them over later (refactor hell)
  3. We put the service on hold and finish the new features first (like that’s ever going to happen)

All three approaches have their benefits and downsides. The biggest problem I have right now is that I see no clean technical solution for the synchronisation between the two systems, and no clear way to proceed with this in an orderly manner.

I’ve tossed the problem in front of the team and I’m waiting for feedback at the moment, but my mind is wandering, and a little voice tells me to keep everything in the monolithic application right now, going for code isolation first and data isolation later on.

Lint/ScriptPermissions for Rakefile?

Today I wanted to activate this rule. The cop itself is actually straightforward: you don’t want to add or write script permissions inside your scripts. We have a Rakefile in our application, as any good standard Ruby on Rails application has, and at the top of the file this is written: #!/usr/bin/env rake

There is a problem with adding this line: your system will use whatever rake it resolves first, and not necessarily the one you had in mind or configured through your Gemfile and Bundler. That’s why it’s better to NOT include these lines in your scripts, and instead make sure the script is actually executable when it needs to be, or that you’re using the right commands, like bundle exec rake my_script.

Now why doesn’t it make sense to add this in the first place? The cop kind of gives it away: the error message says the file doesn’t have any execute permissions, which honestly it should not have. Rake is what runs and needs to be executable; your Rakefile is just the script being parsed by Rake. At least that’s how I understand it, and I’ve never had to set execute permissions on this file before.

The current Ruby on Rails master branch doesn’t have this entry in it either. Perhaps older versions used to have it, but seeing as it is no longer required, I opted to activate this rule AND remove the line from our Rakefile.
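
For reference, activating the cop is just a matter of flipping it on in the RuboCop configuration; the relevant .rubocop.yml entry looks like this:

# .rubocop.yml
Lint/ScriptPermissions:
  Enabled: true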

Binary Gap

So,

Today I was given a coding challenge. Or rather, I took one voluntarily to prepare myself for the future. I know there are companies that require candidates to perform a technical test when applying, and there are those that don’t.

I understand the requirement. A good interview includes a coding test, because you want to see your candidate think and what kind of code they write to solve a particular problem. I personally don’t care if a candidate does not arrive at the correct solution during the test. You simply cannot know everything, and I’m usually more interested in how someone approaches a problem and works towards a solution.

But back to the test: I was asked to figure out the binary gap for any given number. The binary gap is the longest run of consecutive zeros, surrounded by ones, in the binary notation of that number. So for example, the binary gap for 15 is 0, because 15 in binary is 1111 and contains no zeros. For 1041 it’s 5, as 1041 in binary is 10000010001. Note that trailing zeros don’t count: 16 in binary is 10000, but its gap is 0, because those zeros are never closed off by a one. I hope you understand the question.

# Author: Arne De Herdt
#
# Calculates the binary gap for any given number.
# The binary gap is the longest run of consecutive zeros
# surrounded by ones in the binary notation of a given number.
#
# For example:
# n = 1041 = 10000010001 => 5
# n = 16   = 10000       => 0 (trailing zeros are never closed by a one)
#
def binary_gap(n)
  highest_count = 0
  binary_notation = "%b" % n
  counter = 0

  binary_notation.each_char do |character|
    if character == "1"
      highest_count = counter if counter >= highest_count
      counter = 0
    else
      counter += 1
    end
  end

  return highest_count
end

Is this code covering all cases? Yes, it does. It scored 100% on the Codility website, where I got the exercise from. Is this code I want to be running in a production environment? Most likely not. It first renders the whole binary notation as a string, so the memory used grows with the number of bits in the input: a classic O(n) space problem where the memory depends on the input.

Can I write production code? Probably. Given a few iterations and improvements, I can come up with some magical code in Ruby that does it faster and better. But is that something I want to show off with on these tests? Rather not. This code is way easier to explain as a thinking process, and it gives me a chance to address the issues with it, showing my understanding of programming patterns and designs.
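
For the curious, one possible iteration is a constant-memory variant that walks the bits with shifts instead of building a string. A sketch, with the same semantics as the version above:

def binary_gap_shift(n)
  # Drop trailing zeros first; they are not surrounded by ones and never count.
  n >>= 1 while n > 0 && n & 1 == 0

  best = 0
  counter = 0

  while n > 0
    if n & 1 == 1
      best = counter if counter > best
      counter = 0
    else
      counter += 1
    end
    n >>= 1
  end

  best
end

binary_gap_shift(1041) # => 5 (10000010001)
binary_gap_shift(16)   # => 0 (10000, trailing zeros only)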

Feel free to leave comments on how you’d approach this, or on whether you agree with me or not.

Precedence of Operations in Ruby

Okay,

Today I fixed a weird bug that I cannot even fully explain. We rely on CanCanCan to handle authorization of actions in our Ruby on Rails application. When something is not allowed, or the user is simply not logged in, we want to redirect the user back to the root or to the login page of the application, depending on the situation.

Instead of having to write all of this in every controller, we created a small module to do the work for us, and this is included in the ApplicationController:

def permission_denied(*)
  session[:"#{permission_denied_resource}_permission_denied"] = true
  session[:"#{permission_denied_resource}_return_to"] = request.url
  response.headers['X-Permission-Denied'] = true if Rails.env.test?

  flash[:alert] = I18n.t('devise.failure.access_denied') if current_user

  respond_with do |f|
    f.html do
      redirect_to permission_denied_path and return unless request.xhr?
      render json: flash, status: :unauthorized
    end
    f.js do
      flash.discard
      render json: flash, status: :unauthorized
    end
    f.json do
      flash.discard
      render json: flash, status: :unauthorized
    end
  end
end

Now in certain cases, especially with links opened from Office documents, this would result in a double-render error. The line causing the problem was this one:

redirect_to permission_denied_path and return unless request.xhr?

According to Ruby’s operator precedence, this should be fine: an unless modifier has a lower precedence than the and operator, so the line parses as (redirect_to(permission_denied_path) and return) unless request.xhr?. My understanding was that the redirect_to and return would be executed together whenever the request is not XHR. But that didn’t match the error that was happening. The detail that’s easy to miss is that and short-circuits: return only runs when redirect_to evaluates to something truthy, so if redirect_to ever returns a falsy value, execution falls straight through to the render call, and you have rendered twice.
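
To see the short-circuit in isolation, here is a tiny plain-Ruby sketch (nothing Rails-specific about it):

def demo(value)
  # `puts` always returns nil, so `and return` never fires here. Whether or
  # not the modifier lets the line run, we fall through to the next statement.
  puts "side effect: #{value}" and return unless value.nil?
  puts "fell through"
end

demo(42)
# side effect: 42
# fell through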

When I wrote the code as follows, this problem disappeared:

f.html do
  if request.xhr?
    render json: flash, status: :unauthorized
  else
    redirect_to(permission_denied_path) and return
  end
end

The behaviour is exactly what we wanted to achieve with the unless modifier, but this version does not raise any double-render errors when clicking a link from a Word document.

Lesson learned: when combining multiple statements, it’s often better to write them out the long way instead of cramming everything into a single line.

Railsconf 2017

It’s been too long since I wrote an article about coding-related stuff, so I might as well write about the trip to Phoenix, Arizona, and attending Railsconf 2017!

For me it was the first time that I actually attended Railsconf. I’ve always gone to Euruko instead, which is hosted in a different country across Europe each year. On top of my first Railsconf attendance, it was also the very first time that I visited the US. Honestly, this has changed my view on the US completely, in a positive way, as I had a very pleasant experience in Phoenix.

Down to the points!

Form Objects

I’m not really a fan of them, mostly because of the way they have been (ab)used in our application. I understand the point behind them, and I can even identify cases in our application where using a form object makes sense, especially when a page displays a compound of 3 to 4 entities as a single one. A form object really shines in places like that.

At the Railsconf, there was a talk about form objects using Reform. I do not like Reform. I have a big problem with how Reform implements certain things, and its DSL is honestly a complete mess. Combine this with the issues that come with every upgrade, as well as its clear approach of stepping away from Rails, and I spend more time making Reform work inside our Rails application than actually adding useful functionality.

For me, a form object should be written simply. Do it on your own, and be on your own. You can include the various ActiveModel and ActiveSupport modules so that you have validations and can mimic the behaviour of a basic model.
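
As a sketch of what I mean, with made-up class and attribute names, ActiveModel alone gets you most of the way:

# A hand-rolled form object: no extra gem, just ActiveModel for validations
# and the naming conventions the Rails form helpers expect.
class SignupForm
  include ActiveModel::Model

  attr_accessor :email, :company_name

  validates :email, presence: true
  validates :company_name, presence: true

  def save
    return false unless valid?
    # persist the underlying records here
    true
  end
end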

Strong Parameters

Many people, including a large part of the team I work in, seem to have a major issue with Strong Parameters in Rails, and honestly I don’t understand why. I find the usage of these parameters rather straightforward and simple, as long as you keep your forms straightforward and simple.

Things get complicated when you are using dynamic parameters or deeply nested structures in your forms. But really, are you going the right way with such structures?
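
For a straightforward form, the whole thing boils down to a single private method in the controller; a sketch with illustrative attribute names:

def subscription_params
  # Whitelist exactly what the form submits, nothing more.
  params.require(:subscription).permit(:customer_id, :data_source_id, indicator_ids: [])
end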

Rails 5.1

Yup, it’s released. Announced by Aaron Patterson during the final keynote (which was simply hilarious to follow). I’m still in the process of upgrading our application to 5.0, which honestly is a journey in itself:

  • Implementing the new standards
  • Getting rid of all deprecation warnings
  • Dealing with the breaking changes
  • Dealing with the undocumented changes (I’m looking at you, hash operations)
  • Making sure it all works

I think I’m almost done with it now, and can then progress to the next version. But seriously, updating a massive application that runs in production to the next Rails version is always a big undertaking.

Convention over Configuration

I’m really happy this got hammered home once more at the conference. Many people seem to forget that the goal of Ruby on Rails is “programmer happiness”. We don’t need boilerplate code, or dozens of rules and regulations, to program something.

We want a convention, a guideline on how it works and people sticking to it. A happy programmer is a productive programmer.


Well, that’s all for now.
Just wanted to write this out, and hopefully soon I can write a dedicated article about upgrading from Rails 4 to Rails 5, and all the quirks you need to keep in mind while jumping through the different hoops.

Refactoring Code: Dynamic functions

Okay,

We have a factory that is responsible for creating certain class instances to run background jobs. This class was originally 650 lines long, doing nothing more than defining a my_background_class_command method for every background entity.

for… every… single… class inside our CommandEntity namespace…

Refactoring

Rails has this dogma called Convention over Configuration. I wanted to implement something similar, because this factory was nothing more than a method definition for every entity we support.

  • Convention over Configuration
  • Filenames need to match the class they define
  • The entities all reside in their own namespace/folder

Using those three rules, I created a single class method for our factory that walks the entire folder structure, determines whether each file holds a supported type, and defines the required methods on our factory:

module CommandEntity
  class Factory
    include Singleton

    # Constants
    PATH = 'path/to_my/command_entity/'
    DIRECTORY = Rails.root.join(PATH)
    EXCLUDED = [
      DIRECTORY.join('my_module.rb'),
      DIRECTORY.join('my_second_module.rb'),
      DIRECTORY.join('some_error_class.rb'),
      DIRECTORY.join('factory.rb'),
      DIRECTORY.join('magical_helper.rb'),
      DIRECTORY.join('magical_second_helper.rb'),
      DIRECTORY.join('database_magic.rb'),
      DIRECTORY.join('serializers/json.rb')
    ].freeze

    class << self
      def command_factory
        Dir[DIRECTORY.join('**/*.rb')].entries.each do |file_name|
          # Skip the support files that don't define a command entity.
          next if EXCLUDED.include?(Pathname.new(file_name))

          # Reduce the absolute path to the entity's relative path,
          # without the leading separator or the .rb extension.
          chunks = file_name.gsub(Rails.root.to_s, '').gsub('.rb', '').gsub(PATH, '')[1..-1]
          method_name = chunks.gsub(::File::SEPARATOR, '_')
          structure = chunks.split(::File::SEPARATOR).reject(&:blank?)

          # Define an instance method per entity, e.g. `foo_bar_command`,
          # resolving ::CommandEntity::Foo::Bar and instantiating it.
          define_method("#{method_name}_command") do |options = {}|
            "::CommandEntity::#{structure.map(&:camelcase).join('::')}".constantize.new(options)
          end
        end
      end
    end

    command_factory
  end
end

So what exactly are we doing here?

  1. We loop over every entry inside the folder that holds our entities
  2. If it’s something to ignore, we move on to the next entity
  3. The filename has everything we need, so throw out the useless parts
  4. Build the method name
  5. Tell the method to dynamically create the class and instantiate it (see the usage sketch below)
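
For illustration, assuming a hypothetical file path/to_my/command_entity/reports/daily.rb that defines ::CommandEntity::Reports::Daily, the generated method would be used like this:

factory = CommandEntity::Factory.instance
command = factory.reports_daily_command(user_id: 42)
command.class # => CommandEntity::Reports::Daily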

The result: a class that was 650 lines long reduced to a mere 50, while retaining all functionality.

The kicker: we don’t even need the class, because all we do is some_class.new(options), for which we really don’t need a factory in the first place…

Me versus JavaScript

Okay,

If you’ve been following me, it probably doesn’t come as a surprise anymore that I dislike JavaScript. I hate writing code for our application that needs to rely on JavaScript to work properly, and I hate dealing with this mess called a front-end, where JavaScript is needed to get forms sorted out properly or validations and requirements checked.

For me this is a code smell, and it makes the application vulnerable. If the application cannot work without JavaScript, something is fundamentally wrong with its design. But that’s a discussion I will save for later.

The Problem

I’ve been given a ticket where I need to move the fields for all credentials to external endpoints away from our Customer page and place them on the Subscription page. On its own this is a logical requirement, were it not for the structure of the entities involved. I want to refactor the code so that the credentials are also properly stored on the Subscription entities and no longer on the Customer entity, as they don’t belong there. Alas, that was not part of the ticket’s scope.

The issue I want to bring forward is the JavaScript I had to write in order to make the following happen:

  • You first need to select a Customer from a dropdown element.
  • Based on the selected customer, the available DataSource entities are selected and added to the second dropdown.
  • Based on the selected datasource entity, a list of Indicator elements is added to the form.
  • Based on the selected datasource entity, specific credential fields are shown
  • Based on the selected datasource entity, credit fields are enabled/disabled

Because all this data is not available from the get-go, additional calls to the back-end are needed to determine what can be done and what needs to be selected or configured based on the user’s choices. So I came up with the following code…

The JavaScript code

# When a ::Customer is selected,
# we need to retrieve all available ::DataSource entities that are
# available for creating a new ::Subscription.
# This information is available in the :Admins::SubscriptionController
# and requires the ID of the customer to load the relevant information.
$('#subscription_customer_id').on('change', (event) ->
  # Read out the selected customer_id from the select2 element.
  customer_id = event.target.value

  # Make the AJAX request to the backend and
  # load the available DataSources
  # using the customer ID of the selected Customer.
  # This will return an array of hashes containing 
  # the ID and Name and premium status.
  $.get "secret_domain/#{customer_id}/available_data_sources", (data) ->
    # Data received, clear the stored values first!
    window.data_sources = []

    # Clear the entire list inside the select2 element for the DataSource entities.
    # Because we want to allow the Admin to properly select his DataSource, we inject an
    # empty element immediately as well.
    $('#subscription_data_source_id')
      .empty()
      .append($('<option>', { value: null, text: null }))

    # Loop over the datasources and store them one by one in the array
    # and add them to the select2 element as well.
    # This allows to reselect values for a new customer
    # when the selection changes.
    for data_source in data
      window.data_sources.push(
       {
         id: data_source.id,
         name: data_source.name,
         premium: data_source.premium
       }
      )

      $('#subscription_data_source_id').append(
        $('<option>', {
          value: data_source.id,
          text: data_source.name
        }))
)

# When a ::DataSource is selected,
# we need to retrieve the matching ::Indicators that can be selected
# for this ::DataSource.
# This is done by asking the Controller what Indicators
# can be selected based upon the customer_id and data_source_id.
# The information is returned as AJAX,
# which we will use to inject them inside form so
# the user can properly select them from the page.
$('#subscription_data_source_id').on('change', (event) ->
  # Extract the data needed from both select2 elements.
  data_source_id = event.target.value
  data_source_name = null
  customer_id = $('#subscription_customer_id').val()

  # Fetch the name of the selected DataSource
  for option in event.target.options
    if option.value == data_source_id
      # Strip all © symbols and replace spaces by _
      data_source_name = option.text
         .replace(/\s©/g, '')
         .replace(/\s+/g, "_")
         .toLowerCase()

  # Because we don't know whether we have re-selected a DataSource,
  # we are going to enable the 2 final input fields of the Form
  # to allow changes to be made.
  # These fields are optional, and don't need to be submitted,
  # which is why we use the disabled property. So let's remove it.
  $('input[name="subscription[expires_at]"]').removeAttr('disabled')
  $('input[name="subscription[credit_count_limit]"]').removeAttr('disabled')

  # Now loop over the stored ::DataSource entities.
  # If we find the one that we have selected, mark the fields as disabled based on whether this
  # ::DataSource is a premium one or not.
  for data_source in window.data_sources
    if parseInt(data_source.id) == parseInt(data_source_id)
      if !data_source.premium
        $('input[name="subscription[expires_at]"]').attr('disabled', 'disabled')
        $('input[name="subscription[credit_count_limit]"]').attr('disabled', 'disabled')
        break

  # Now we can ask the application to fetch us the ::Indicators.
  # The following information is returned:
  #  - Riskgroup ID, Riskgroup Name
  #  - Indicator ID, Name, Description.
  # We will use the AJAX response to properly populate the table and make them all selected by default.
  $.ajax(RM.Utils.prepareAjaxRequest('get',
    url: "/secret_admin_panel/#{data_source_id}/sid?c_id=#{customer_id}",
    success: (data) ->
      # Copy all the data received from the JSON response into the table.
      # We first remove all existing elements, and then rebuild the table from scratch.
      table = $('#data_source_indicators > table.table.table-striped')
      table.children().remove()

      for risk_group in data.risk_groups
        # Add the table-header
        table.append("
          <tr>
            <th></th>
            <th>Indicator</th>
            <th>Description</th>
            <th>Risk</th>
          </tr>"
        )

        # Add the indicators below that.
        for indicator in risk_group.indicators
          row = $('<tr></tr>')
          row.append("<td><input name=\"subscription[indicator_ids][]\" checked=\"checked\" type=\"checkbox\" value=\"#{indicator.id}\" /></td>")
          row.append("<td>#{indicator.name}</td>")
          row.append("<td>#{indicator.description}</td>")
          row.append("<td>#{indicator.risk_name}</td>")
          table.append(row)
    )
  )

  # The DataSource might also require credentials to be set.
  # Because we are unsure whether we need some Credentials for the DataSource or not
  # we need to look up the ID of the DataSource and fetch its matching
  # CSS element.
  # But we hide all elements before that.
  $("#data_source_bisnode").css('display', 'none')
  $("#data_source_bureau_van_dijk").css('display', 'none')
  $("#data_source_creditsafe").css('display', 'none')
  $("#data_source_format").css('display', 'none')

  if data_source_name != null
    $("#data_source_#{data_source_name}").css('display', 'block')
)

For me this is hacky:

  • I need to hard-code URLs, meaning I cannot reuse the logic anywhere.
  • It’s bound too tightly to the design of the pages.
  • It combines way too much data, to the point where separation becomes hard.

Solution

The solution for me becomes pretty clear at this point:

  1. Separate the credentials into their own encrypted entities (a sketch follows after this list)
  2. Unify the interface for these Subscription objects to be generic
  3. Restructure the data so that each form knows what to display
  4. Get rid of the JavaScript.
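
To make the first step a bit more concrete: the credentials could move into their own encrypted entity per Subscription. A sketch using the attr_encrypted gem, with hypothetical names and key handling:

# Hypothetical model: each Subscription owns its encrypted credentials,
# instead of the Customer carrying them for every subscription.
class DataSourceCredential < ActiveRecord::Base
  belongs_to :subscription

  attr_encrypted :username, key: Settings.credentials_encryption_key
  attr_encrypted :password, key: Settings.credentials_encryption_key
end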

Setting a TTL for Redis keys

Okay,

Based on the title, you’d think this was supposed to be something simple, right? Well, I can tell you that it completely depends on the gems you are using whether you can get all your entries properly configured with a reasonable TTL, rather than falling back on the defaults of one year or infinity.

::Redis::Semaphore

One of the first gems we use in our application is the ::Redis::Semaphore gem. We use this gem to create, as the name implies, semaphore objects in Redis to prevent the simultaneous execution of background jobs in our Sidekiq framework.

The documentation of this gem mentions the :expiration key that can be set. They also warn that this might be dangerous, because a key could expire during the execution of the process. While this might be true, it luckily was a non-issue in our situation.

Basically, when a key receives a TTL, the timer starts running as soon as the key is created in Redis. The Redis documentation does state that commands which overwrite a key’s contents, such as SET and GETSET, clear its timeout, but our tests showed the expiration being re-applied while the semaphore is in use. So with the way we use the semaphore keys, our key should not expire during execution of the process. The snippet below shows how to use the setting when creating a semaphore:

::Redis::Semaphore.new("our key", redis: Connection.redis_client, expiration: 7.days.to_i)

Rails Cache

The second place where we use Redis for caching purposes is the overall Rails cache. To make this happen, we relied on the redis-rails gem, which makes it possible to use Redis as the caching back-end for the Rails framework. The configuration is done in your application.rb and is pretty straightforward:

# Redis Cache Configuration.
# With the new TTL settings, keys are now automatically expired after 7 days.
config.cache_store = :redis_store, Chamber[:redis][:cache], { expires_in: 7.days.to_i }
config.session_store :redis_store, redis_server: Chamber[:redis][:cache], key: Chamber[:redis][:session_key], expires_in: 1.year

Exceptional cases

Of course, it wouldn’t be a real application if there weren’t any exceptions to the standard configuration. There’s an edge case in our application where we do not wish to expire the data stored inside Redis. But how do we do that, now that the global cache has been set to 7 days?

Easy: you simply remove the TTL in the caching call. When expires_in is explicitly set to nil, the key is stored without an expiry, and the Redis TTL command reports -1 for it, meaning it never expires:

Rails.cache.fetch(cache_key.to_param, expires_in: nil) { yield }

Geocoder

This is where it gets more interesting. The geocoder gem allows you to make calls to various endpoints to have an address transformed into geo-coordinates. The way we used this gem, we stored the returned information in Redis and used it as a faster lookup when the address matches, reducing the load on the endpoints we used.

The gem itself does not support setting a TTL on the keys it uses. Since we had it configured to store everything in Redis, we wrote a small wrapper that mimics the behaviour of the Geocoder cache and sets the desired TTL for us:

module Geocoder
  class AutoExpireRedisCache
    # Initializes the cache using the actual store and TTL as arguments.
    # By default, keys are expired after a week.
    def initialize(store, ttl = 7.days.to_i)
      @store = store
      @ttl = ttl
    end

    # Looks up the value using the provided URL as key.
    def [](url)
      @store[url]
    end

    # Store the provided value, using the URL as key.
    # The stored key is expired after the defined TTL.
    def []=(url, value)
      @store[url] = value
      @store.expire(url, @ttl)
    end

    # Returns all keys currently used by the store.
    def keys
      @store.keys
    end

    # Deletes the specified key from the store.
    def del(url)
      @store.del(url)
    end
  end
end
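
Wiring the wrapper up is then just a matter of handing it to Geocoder’s cache configuration. The Redis connection below is an assumption; use whatever client your application already has:

Geocoder.configure(
  # Wrap the raw Redis client so every cached lookup receives our TTL.
  cache: Geocoder::AutoExpireRedisCache.new(Redis.new(url: ENV['REDIS_URL']))
)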

::Redis::Rack::Cache

This was the hardest gem to bring in line with our TTL policy for Redis keys. The gem is basically a hook for Rack to store its cache data inside Redis. Unfortunately, we do not use this as it should be used, in the sense that we never call the cache_for method in our controllers. So throughout the entire application, we rely on the default settings of this gem.

This gem has a constant called ::Rack::Cache::Redis::DEFAULT_TTL, which is set to 1 year. A whole year is pretty long to keep cached pages and metadata for the application. Looking through the documentation of Redis, Rack and this gem, we came across an initial solution:

config.action_dispatch.rack_cache = {
  metastore: "#{Chamber[:redis][:cache]}/metastore",
  entitystore: "#{Chamber[:redis][:cache]}/entitystore",
  default_ttl: 7.days.to_i,
  use_native_ttl: true
}

Unfortunately, this doesn’t work at all. We still end up with the weird behaviour that everything gets stored for over a year, because of the constant defined by the gem, and the gem offers no configuration option to override this behaviour.

So what does every good Ruby programmer do?

# HACKY TIME!
# We redefine the constant of the Redis Rack Cache to be 7 days instead of one year.
# Since this gem doesn't like to be configured, Oli forces it to be configured using ruby sugar.
::Redis::Rack::Cache.send(:remove_const, :DEFAULT_TTL)
::Redis::Rack::Cache.send(:const_set, :DEFAULT_TTL, 7.days.to_i)

We injected this snippet before the configuration part in our application.rb, overwriting the constant with a setting that’s more to our liking. While it’s not the most elegant solution, it does get the job done:

➜  ~ redis-cli
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> INFO KeySpace
# Keyspace
db0:keys=2,expires=0,avg_ttl=0
db1:keys=51,expires=51,avg_ttl=28443512775
127.0.0.1:6379[1]> TTL metastore:4453e4e41864af53bcce5e2c3dffe719a92374ae
(integer) 68400 # counting down from the 7-day TTL

Of course, this requires you to flush all existing keys once these settings are applied, but we achieved our desired goal: keeping the memory footprint of Redis in line.
