Class: Puma::Server

Relationships & Source Files
Super Chains via Extension / Inclusion / Inheritance
Class Chain:
self, Forwardable
Instance Chain:
self, Request, Const
Inherits: Object
Defined in: lib/puma/server.rb

Overview

The HTTP Server itself. Serves out a single Rack app.

This class is used by the Single and Cluster classes to generate one or more Server instances capable of handling requests. Each Puma process will contain one Server instance.

The Server instance pulls requests from the socket and adds them to a Reactor, where they are eventually passed to a ThreadPool.

Each Server will have one reactor and one thread pool.

Constant Summary

Const - Included

BANNED_HEADER_KEY, CGI_VER, CHUNKED, CHUNK_SIZE, CLOSE, CLOSE_CHUNKED, CODE_NAME, COLON, CONNECTION_CLOSE, CONNECTION_KEEP_ALIVE, CONTENT_LENGTH, CONTENT_LENGTH2, CONTENT_LENGTH_S, CONTINUE, DQUOTE, EARLY_HINTS, ERROR_RESPONSE, FAST_TRACK_KA_TIMEOUT, FIRST_DATA_TIMEOUT, GATEWAY_INTERFACE, HALT_COMMAND, HEAD, HIJACK, HIJACK_IO, HIJACK_P, HTTP, HTTPS, HTTPS_KEY, HTTP_10_200, HTTP_11, HTTP_11_100, HTTP_11_200, HTTP_CONNECTION, HTTP_EXPECT, HTTP_HEADER_DELIMITER, HTTP_HOST, HTTP_VERSION, HTTP_X_FORWARDED_FOR, HTTP_X_FORWARDED_PROTO, HTTP_X_FORWARDED_SCHEME, HTTP_X_FORWARDED_SSL, ILLEGAL_HEADER_KEY_REGEX, ILLEGAL_HEADER_VALUE_REGEX, KEEP_ALIVE, LINE_END, LOCALHOST, LOCALHOST_IP, MAX_BODY, MAX_FAST_INLINE, MAX_HEADER, NEWLINE, PATH_INFO, PERSISTENT_TIMEOUT, PORT_443, PORT_80, PROXY_PROTOCOL_V1_REGEX, PUMA_CONFIG, PUMA_PEERCERT, PUMA_SERVER_STRING, PUMA_SOCKET, PUMA_TMP_BASE, PUMA_VERSION, QUERY_STRING, RACK_AFTER_REPLY, RACK_INPUT, RACK_URL_SCHEME, REMOTE_ADDR, REQUEST_METHOD, REQUEST_PATH, REQUEST_URI, RESTART_COMMAND, SERVER_NAME, SERVER_PORT, SERVER_PROTOCOL, SERVER_SOFTWARE, STOP_COMMAND, TRANSFER_ENCODING, TRANSFER_ENCODING2, TRANSFER_ENCODING_CHUNKED, WRITE_TIMEOUT

Class Attribute Summary

Class Method Summary

Instance Attribute Summary

Instance Method Summary

Request - Included

#default_server_port,
#handle_request

Takes the request contained in client, invokes the Rack application to construct the response and writes it back to client.io.

#normalize_env

Given a Hash env for the request read from client, add and fix up keys to comply with Rack’s env guidelines.

#fast_write

Writes to an io (normally Client#io) using #syswrite.

#fetch_status_code, #illegal_header_key?, #illegal_header_value?,
#req_env_post_parse

Fix up any headers with `,` in the name to have `_` now.

#str_early_hints

Used in the lambda for env[ Const::EARLY_HINTS ].

#str_headers

Processes and writes headers to the IOBuffer.

Constructor Details

.new(app, events = Events.stdio, options = {}) ⇒ Server

Note:

Several instance variables exist so they are available for testing, and have default values set via `fetch`. Normally the values are set via `::Puma::Configuration.puma_default_options`.

Create a server for the Rack app #app.

#events is an object that will be called when certain error events occur, so they can be handled. See Events for the list of current methods to implement.

#run returns a thread that you can join on to wait for the server to do its work.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 74

def initialize(app, events=Events.stdio, options={})
  @app = app
  @events = events
  @log_writer = events

  @check, @notify = nil
  @status = :stop

  @auto_trim_time = 30
  @reaping_time = 1

  @thread = nil
  @thread_pool = nil

  @options = options

  @early_hints         = options.fetch :early_hints, nil
  @first_data_timeout  = options.fetch :first_data_timeout, FIRST_DATA_TIMEOUT
  @min_threads         = options.fetch :min_threads, 0
  @max_threads         = options.fetch :max_threads , (Puma.mri? ? 5 : 16)
  @persistent_timeout  = options.fetch :persistent_timeout, PERSISTENT_TIMEOUT
  @queue_requests      = options.fetch :queue_requests, true
  @max_fast_inline     = options.fetch :max_fast_inline, MAX_FAST_INLINE
  @io_selector_backend = options.fetch :io_selector_backend, :auto

  temp = !!(@options[:environment] =~ /\A(development|test)\z/)
  @leak_stack_on_error = @options[:environment] ? temp : true

  @binder = Binder.new(events)

  ENV['RACK_ENV'] ||= "development"

  @mode = :http

  @precheck_closing = true

  @requests_count = 0
end
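
For illustration, here is a minimal, hypothetical sketch of constructing and running a Server directly, outside of the Single/Cluster runners (it assumes Puma 5.x; the Rack app, host, and port are arbitrary). add_tcp_listener is delegated to the Binder via Forwardable.

require 'puma'
require 'puma/server'
require 'puma/events'

# Trivial Rack app used only for this sketch
app = ->(env) { [200, { 'Content-Type' => 'text/plain' }, ["Hello from Puma::Server\n"]] }

server = Puma::Server.new(app, Puma::Events.stdio, { min_threads: 0, max_threads: 4 })
server.add_tcp_listener '127.0.0.1', 9292   # delegated to the Binder

# run (background by default) returns the server thread; join it to block until shutdown
server.run.join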

Class Attribute Details

.closed_socket_supported? ⇒ Boolean (readonly)

This method is for internal use only.

Version:

  • 5.0.0

[ GitHub ]

  
# File 'lib/puma/server.rb', line 131

def closed_socket_supported?
  Socket.const_defined?(:TCP_INFO) && Socket.const_defined?(:IPPROTO_TCP)
end

.current (readonly)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 119

def current
  Thread.current[ThreadLocalKey]
end

.tcp_cork_supported? ⇒ Boolean (readonly)

This method is for internal use only.

Version:

  • 5.0.0

[ GitHub ]

  
# File 'lib/puma/server.rb', line 125

def tcp_cork_supported?
  Socket.const_defined?(:TCP_CORK) && Socket.const_defined?(:IPPROTO_TCP)
end

Instance Attribute Details

#app (rw)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 54

attr_accessor :app

#auto_trim_time (rw)

TODO:

the following may be deprecated in the future

[ GitHub ]

  
# File 'lib/puma/server.rb', line 45

attr_reader :auto_trim_time, :early_hints, :first_data_timeout,
  :leak_stack_on_error,
  :persistent_timeout, :reaping_time

#auto_trim_time=(value) (rw)

Deprecated.

v6.0.0

[ GitHub ]

#backlog (readonly)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 197

def backlog
  @thread_pool and @thread_pool.backlog
end

#binder (rw)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 55

attr_accessor :binder

#early_hints (rw)

TODO:

the following may be deprecated in the future

[ GitHub ]

  
# File 'lib/puma/server.rb', line 45

attr_reader :auto_trim_time, :early_hints, :first_data_timeout,
  :leak_stack_on_error,
  :persistent_timeout, :reaping_time

#early_hints=(value) (rw)

Deprecated.

v6.0.0

[ GitHub ]

#events (readonly)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 39

attr_reader :events

#first_data_timeout (rw)

TODO:

the following may be deprecated in the future

[ GitHub ]

  
# File 'lib/puma/server.rb', line 45

attr_reader :auto_trim_time, :early_hints, :first_data_timeout,
  :leak_stack_on_error,
  :persistent_timeout, :reaping_time

#first_data_timeout=(value) (rw)

Deprecated.

v6.0.0

[ GitHub ]

#leak_stack_on_error (rw)

TODO:

the following may be deprecated in the future

[ GitHub ]

  
# File 'lib/puma/server.rb', line 45

attr_reader :auto_trim_time, :early_hints, :first_data_timeout,
  :leak_stack_on_error,
  :persistent_timeout, :reaping_time

#leak_stack_on_error=(value) (rw)

Deprecated.

v6.0.0

[ GitHub ]

#log_writer (readonly)

to help with backports

[ GitHub ]

  
# File 'lib/puma/server.rb', line 42

attr_reader :log_writer                 # to help with backports

#max_threads (rw)

for #stats

[ GitHub ]

  
# File 'lib/puma/server.rb', line 40

attr_reader :min_threads, :max_threads  # for #stats

#max_threads=(value) (rw)

Deprecated.

v6.0.0

[ GitHub ]

#min_threads (rw)

for #stats

[ GitHub ]

  
# File 'lib/puma/server.rb', line 40

attr_reader :min_threads, :max_threads  # for #stats

#min_threads=(value) (rw)

Deprecated.

v6.0.0

[ GitHub ]

#persistent_timeout (rw)

TODO:

the following may be deprecated in the future

[ GitHub ]

  
# File 'lib/puma/server.rb', line 45

attr_reader :auto_trim_time, :early_hints, :first_data_timeout,
  :leak_stack_on_error,
  :persistent_timeout, :reaping_time

#persistent_timeout=(value) (rw)

Deprecated.

v6.0.0

[ GitHub ]

#pool_capacity (readonly)

This number represents the number of requests that the server is capable of taking right now.

For example, if the number is 5, there are 5 threads sitting idle, ready to take a request. If one request comes in, the value drops to 4 until that request finishes processing.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 215

def pool_capacity
  @thread_pool and @thread_pool.pool_capacity
end
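
As a rough illustration (a hypothetical snippet, assuming server is a running Puma::Server as in the constructor sketch above), the value can be polled for simple monitoring:

# Poll idle capacity every few seconds; returns nil until the thread pool exists
Thread.new do
  loop do
    puts "pool_capacity: #{server.pool_capacity.inspect}"
    sleep 5
  end
end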

#reaping_time (rw)

TODO:

the following may be deprecated in the future

[ GitHub ]

  
# File 'lib/puma/server.rb', line 45

attr_reader :auto_trim_time, :early_hints, :first_data_timeout,
  :leak_stack_on_error,
  :persistent_timeout, :reaping_time

#reaping_time=(value) (rw)

Deprecated.

v6.0.0

[ GitHub ]

#requests_count (readonly)

Version:

  • 5.0.0

[ GitHub ]

  
# File 'lib/puma/server.rb', line 41

attr_reader :requests_count             # @version 5.0.0

#running (readonly)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 202

def running
  @thread_pool and @thread_pool.spawned
end

#shutting_down? ⇒ Boolean (readonly)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 614

def shutting_down?
  @status == :stop || @status == :restart
end

#stats (readonly)

Returns a hash of stats about the running server for reporting purposes.

Version:

  • 5.0.0

[ GitHub ]

  
# File 'lib/puma/server.rb', line 625

def stats
  STAT_METHODS.map {|name| [name, send(name) || 0]}.to_h
end
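
For illustration, the returned hash is keyed by STAT_METHODS and looks roughly like this (the values below are made up):

server.stats
# => {:backlog=>0, :running=>4, :pool_capacity=>4, :max_threads=>4, :requests_count=>42}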

#thread (readonly)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 38

attr_reader :thread

Instance Method Details

#begin_restart(sync = false)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 609

def begin_restart(sync=false)
  notify_safely(RESTART_COMMAND)
  @thread.join if @thread && sync
end

#client_error(e, client)

Handle various error types thrown by Client I/O operations.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 509

def client_error(e, client)
  # Swallow, do not log
  return if [ConnectionError, EOFError].include?(e.class)

  lowlevel_error(e, client.env)
  case e
  when MiniSSL::SSLError
    @events.ssl_error e, client.io
  when HttpParserError
    client.write_error(400)
    @events.parse_error e, client
  when HttpParserError501
    client.write_error(501)
    @events.parse_error e, client
  else
    client.write_error(500)
    @events.unknown_error e, nil, "Read"
  end
end

#closed_socket?(socket)

See additional method definition at line 174.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 191

def closed_socket?(socket)
  skt = socket.to_io
  return false unless skt.kind_of?(TCPSocket) && @precheck_closing

  begin
    tcp_info = skt.getsockopt(Socket::IPPROTO_TCP, Socket::TCP_INFO)
  rescue IOError, SystemCallError
    Puma::Util.purge_interrupt_queue
    @precheck_closing = false
    false
  else
    state = tcp_info.unpack(UNPACK_TCP_STATE_FROM_TCP_INFO)[0]
    # TIME_WAIT: 6, CLOSE: 7, CLOSE_WAIT: 8, LAST_ACK: 9, CLOSING: 11
    (state >= 6 && state <= 9) || state == 11
  end
end

#cork_socket(socket)

6 == Socket::IPPROTO_TCP, 3 == TCP_CORK, 1/0 == turn on/off.

See additional method definition at line 146.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 164

def cork_socket(socket)
  skt = socket.to_io
  begin
    skt.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_CORK, 1) if skt.kind_of? TCPSocket
  rescue IOError, SystemCallError
    Puma::Util.purge_interrupt_queue
  end
end

#graceful_shutdown

Wait for all outstanding requests to finish.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 552

def graceful_shutdown
  if @options[:shutdown_debug]
    threads = Thread.list
    total = threads.size

    pid = Process.pid

    $stdout.syswrite "#{pid}: === Begin thread backtrace dump ===\n"

    threads.each_with_index do |t,i|
      $stdout.syswrite "#{pid}: Thread #{i+1}/#{total}: #{t.inspect}\n"
      $stdout.syswrite "#{pid}: #{t.backtrace.join("\n#{pid}: ")}\n\n"
    end
    $stdout.syswrite "#{pid}: === End thread backtrace dump ===\n"
  end

  if @status != :restart
    @binder.close
  end

  if @thread_pool
    if timeout = @options[:force_shutdown_after]
      @thread_pool.shutdown timeout.to_f
    else
      @thread_pool.shutdown
    end
  end
end

#halt(sync = false)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 604

def halt(sync=false)
  notify_safely(HALT_COMMAND)
  @thread.join if @thread && sync
end

#handle_check

This method is for internal use only.
[ GitHub ]

  
# File 'lib/puma/server.rb', line 389

def handle_check
  cmd = @check.read(1)

  case cmd
  when STOP_COMMAND
    @status = :stop
    return true
  when HALT_COMMAND
    @status = :halt
    return true
  when RESTART_COMMAND
    @status = :restart
    return true
  end

  false
end

#handle_servers

[ GitHub ]

  
# File 'lib/puma/server.rb', line 312

def handle_servers
  begin
    check = @check
    sockets = [check] + @binder.ios
    pool = @thread_pool
    queue_requests = @queue_requests
    drain = @options[:drain_on_shutdown] ? 0 : nil

    addr_send_name, addr_value = case @options[:remote_address]
    when :value
      [:peerip=, @options[:remote_address_value]]
    when :header
      [:remote_addr_header=, @options[:remote_address_header]]
    when :proxy_protocol
      [:expect_proxy_proto=, @options[:remote_address_proxy_protocol]]
    else
      [nil, nil]
    end

    while @status == :run || (drain && shutting_down?)
      begin
        ios = IO.select sockets, nil, nil, (shutting_down? ? 0 : nil)
        break unless ios
        ios.first.each do |sock|
          if sock == check
            break if handle_check
          else
            pool.wait_until_not_full
            pool.wait_for_less_busy_worker(@options[:wait_for_less_busy_worker])

            io = begin
              sock.accept_nonblock
            rescue IO::WaitReadable
              next
            end
            drain += 1 if shutting_down?
            pool << Client.new(io, @binder.env(sock)).tap { |c|
              c.listener = sock
              c.send(addr_send_name, addr_value) if addr_value
            }
          end
        end
      rescue IOError, Errno::EBADF
        # In the case that any of the sockets are unexpectedly closed.
        raise
      rescue StandardError => e
        @events.unknown_error e, nil, "Listen loop"
      end
    end

    @events.debug "Drained #{drain} additional connections." if drain
    @events.fire :state, @status

    if queue_requests
      @queue_requests = false
      @reactor.shutdown
    end
    graceful_shutdown if @status == :stop || @status == :restart
  rescue Exception => e
    @events.unknown_error e, nil, "Exception handling servers"
  ensure
    # RuntimeError is Ruby 2.2 issue, can't modify frozen IOError
    # Errno::EBADF is infrequently raised
    [@check, @notify].each do |io|
      begin
        io.close unless io.closed?
      rescue Errno::EBADF, RuntimeError
      end
    end
    @notify = nil
    @check = nil
  end

  @events.fire :state, :done
end

#inherit_binder(bind)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 113

def inherit_binder(bind)
  @binder = bind
end

#lowlevel_error(e, env, status = 500)

A fallback Rack response if @app raises an exception.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 531

def lowlevel_error(e, env, status=500)
  if handler = @options[:lowlevel_error_handler]
    if handler.arity == 1
      return handler.call(e)
    elsif handler.arity == 2
      return handler.call(e, env)
    else
      return handler.call(e, env, status)
    end
  end

  if @leak_stack_on_error
    backtrace = e.backtrace.nil? ? '<no backtrace available>' : e.backtrace.join("\n")
    [status, {}, ["Puma caught this error: #{e.message} (#{e.class})\n#{backtrace}"]]
  else
    [status, {}, ["An unhandled lowlevel error occurred. The application logs may have details.\n"]]
  end
end
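
For example, a custom handler can be supplied through Puma's configuration DSL (a sketch; the block's arity, 1 to 3, determines which of the calls above is used):

# config/puma.rb
lowlevel_error_handler do |e, env, status|
  # Must return a Rack response triple; the body shown here is illustrative
  [status, { 'Content-Type' => 'text/plain' }, ["Sorry, something went wrong (#{e.class})\n"]]
end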

#notify_safely(message) (private)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 581

def notify_safely(message)
  @notify << message
rescue IOError, NoMethodError, Errno::EPIPE
  # The server, in another thread, is shutting down
  Puma::Util.purge_interrupt_queue
rescue RuntimeError => e
  # Temporary workaround for https://bugs.ruby-lang.org/issues/13239
  if e.message.include?('IOError')
    Puma::Util.purge_interrupt_queue
  else
    raise e
  end
end

#process_client(client, buffer)

Given a connection on client, handle the incoming requests, or queue the connection in the Reactor if no request is available.

This method is called from a ThreadPool worker thread.

This method supports HTTP Keep-Alive, so it may wait for another request before returning, depending on whether the client indicates that it supports keep-alive.

Returns true if one or more requests were processed.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 417

def process_client(client, buffer)
  # Advertise this server into the thread
  Thread.current[ThreadLocalKey] = self

  clean_thread_locals = @options[:clean_thread_locals]
  close_socket = true

  requests = 0

  begin
    if @queue_requests &&
      !client.eagerly_finish

      client.set_timeout(@first_data_timeout)
      if @reactor.add client
        close_socket = false
        return false
      end
    end

    with_force_shutdown(client) do
      client.finish(@first_data_timeout)
    end

    while true
      @requests_count += 1
      case handle_request(client, buffer, requests + 1)
      when false
        break
      when :async
        close_socket = false
        break
      when true
        buffer.reset

        ThreadPool.clean_thread_locals if clean_thread_locals

        requests += 1

        # As an optimization, try to read the next request from the
        # socket for a short time before returning to the reactor.
        fast_check = @status == :run

        # Always pass the client back to the reactor after a reasonable
        # number of inline requests if there are other requests pending.
        fast_check = false if requests >= @max_fast_inline &&
          @thread_pool.backlog > 0

        next_request_ready = with_force_shutdown(client) do
          client.reset(fast_check)
        end

        unless next_request_ready
          break unless @queue_requests
          client.set_timeout @persistent_timeout
          if @reactor.add client
            close_socket = false
            break
          end
        end
      end
    end
    true
  rescue StandardError => e
    client_error(e, client)
    # The ensure tries to close client down
    requests > 0
  ensure
    buffer.reset

    begin
      client.close if close_socket
    rescue IOError, SystemCallError
      Puma::Util.purge_interrupt_queue
      # Already closed
    rescue StandardError => e
      @events.unknown_error e, nil, "Client"
    end
  end
end

#reactor_wakeup(client)

This method is called from the Reactor thread when a queued Client receives data, times out, or when the Reactor is shutting down.

It is responsible for ensuring that a request has been completely received before it starts to be processed by the ThreadPool. This is known as read buffering. If read buffering is not done here, and no other layer performs it (for example, a reverse proxy such as nginx), then the application would be subject to a slow client attack.

For a graphical representation of how the request buffer works see architecture.md.

The method checks whether it has the full header and body with the Client#try_to_finish method. If the full request has been sent, then the request is passed to the ThreadPool (`@thread_pool << client`) so that a “worker thread” can pick up the request and begin to execute application logic. The Client is then removed from the reactor (return true).

If a client object times out, a 408 response is written, its connection is closed, and the object is removed from the reactor (return true).

If the Reactor is shutting down, all Clients are either timed out or passed to the ThreadPool, depending on their current state (#can_close?).

Otherwise, if the full request is not ready, the client remains in the reactor (return false). When the client sends more data to the socket, the Client object wakes up and is checked again to see whether it is ready to be passed to the thread pool.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 296

def reactor_wakeup(client)
  shutdown = !@queue_requests
  if client.try_to_finish || (shutdown && !client.can_close?)
    @thread_pool << client
  elsif shutdown || client.timeout == 0
    client.timeout!
  else
    client.set_timeout(@first_data_timeout)
    false
  end
rescue StandardError => e
  client_error(e, client)
  client.close
  true
end

#run(background = true, thread_name: 'srv')

Runs the server.

If background is true (the default), a thread is spun up in the background to handle requests. Otherwise, requests are handled synchronously in the calling thread.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 225

def run(background=true, thread_name: 'srv')
  BasicSocket.do_not_reverse_lookup = true

  @events.fire :state, :booting

  @status = :run

  @thread_pool = ThreadPool.new(
    thread_name,
    @min_threads,
    @max_threads,
    ::Puma::IOBuffer,
    &method(:process_client)
  )

  @thread_pool.out_of_band_hook = @options[:out_of_band]
  @thread_pool.clean_thread_locals = @options[:clean_thread_locals]

  if @queue_requests
    @reactor = Reactor.new(@io_selector_backend, &method(:reactor_wakeup))
    @reactor.run
  end

  if @reaping_time
    @thread_pool.auto_reap!(@reaping_time)
  end

  if @auto_trim_time
    @thread_pool.auto_trim!(@auto_trim_time)
  end

  @check, @notify = Puma::Util.pipe unless @notify

  @events.fire :state, :running

  if background
    @thread = Thread.new do
      Puma.set_thread_name thread_name
      handle_servers
    end
    return @thread
  else
    handle_servers
  end
end
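
For example (a hypothetical continuation of the constructor sketch above, so server is assumed to be a Puma::Server):

# Either run in the background and join the returned thread...
server.run.join

# ...or handle requests synchronously in the calling thread:
# server.run(false)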

#stop(sync = false)

Stops the acceptor thread and then causes the worker threads to finish off the request queue before finally exiting.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 599

def stop(sync=false)
  notify_safely(STOP_COMMAND)
  @thread.join if @thread && sync
end
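
A common pattern (hypothetical, with server as in the constructor sketch above) is to request the stop from a signal handler and let the joined server thread unwind:

Signal.trap('INT') { server.stop }   # ask for a graceful stop without blocking in the trap
server.run.join                      # join returns once graceful_shutdown has completed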

#uncork_socket(socket)

See additional method definition at line 155.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 167

def uncork_socket(socket)
  skt = socket.to_io
  begin
    skt.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_CORK, 0) if skt.kind_of? TCPSocket
  rescue IOError, SystemCallError
    Puma::Util.purge_interrupt_queue
  end
end

#with_force_shutdown(client, &block)

Triggers a client timeout if the thread-pool shuts down during execution of the provided block.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 500

def with_force_shutdown(client, &block)
  @thread_pool.with_force_shutdown(&block)
rescue ThreadPool::ForceShutdown
  client.timeout!
end