CloudI API Documentation

version 2.0.7
last updated on February 24th 2024

CloudI API - Making a Service

CloudI Service API - Controlling CloudI


CloudI API - Making a Service

1.0 - Introduction

The CloudI API provides a simple messaging API which allows CloudI services to send requests. The CloudI API supports both publish/subscribe and request/reply communication in an intuitive way. It is not necessary to understand the Erlang programming language (or Elixir) to use the CloudI API, since a full CloudI API implementation is provided for every supported programming language (currently ATS, C/C++, Elixir, Erlang, Go, Haskell, Java, JavaScript, OCaml, Perl, PHP, Python, Ruby, and Rust).

The CloudI API messaging is different from other messaging APIs and provides simpler integration for a few reasons:

The subscribe function subscribes to a service name pattern string which may contain "*" and "?" wildcard characters, to accept any matching service requests. Either "*" or "?" in a service name pattern will match one or more characters with "?" never matching the character that follows it ("?" is unable to be the last character, i.e., "/?/" matches "/a/" but never "/a/b/" while "/*/" will match either). The send_sync function and the send_async function provide point-to-point communication based on the service name provided. When multiple services subscribe with the same service name pattern the destination is picked based on the sending service's "destination refresh method", which can be any of the following:

Destination Refresh Method              Meaning
lazy_closest (or) immediate_closest     A service running on the local node will be selected, unless the destination only exists on a remote node
lazy_furthest (or) immediate_furthest   A service running on a remote node will be selected, unless the destination only exists on the local node
lazy_random (or) immediate_random       A service is selected randomly from the subscribed services
lazy_local (or) immediate_local         Only a service on the local node is selected
lazy_remote (or) immediate_remote       Only a service on a remote node is selected
lazy_newest (or) immediate_newest       Only the most recently subscribed service is selected
lazy_oldest (or) immediate_oldest       Only the first subscribed service is selected
none                                    The service should never send a request and it is an error when the service attempts to send (the service may still receive requests)

The "lazy" prefix and the "immediate" prefix on the destination refresh method determines whether stale data is used within the service's data or if a single lookup process is used to get the most current destination result, respectively ("lazy" is for when long-lived services are the destination but consumes more service memory, and "immediate" is for when short-lived services are the destination but creates contention for the lookup process).

When separate service processes subscribe with the same service name pattern, each subscription is used based on random selection (if more than one service process is available based on the destination refresh method) when a service request is sent to the service name. If the same service subscribes with the same service name pattern more than once within a single external service thread, each subscription is used in round-robin order (the first subscription is called first, so order is preserved) when the service thread receives a request for the specific service name pattern.

The mcast_async function provides publish functionality by sending a request asynchronously to all services that have subscribed with a matching service name pattern. To receive an asynchronous service request's response, recv_async is used with the "TransId" (i.e., Transaction Id, a v1 UUID) or with a null UUID to receive the oldest service request response.

The return function is used to respond to a service request and terminate the current request handler (i.e., the service request is finished, at that point). A service can return a null response if the sending service should not receive a response, which can be used for typical response-less publish functionality. The forward function provides a new destination for the same service request, delaying the request's completion, but still terminating the current request handler.
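To make these concepts concrete, below is a minimal sketch of an external service written in Python, using the Python signatures shown later in this document. The configured prefix "/examples/", the subscribed name "echo" and the callback parameter list are assumptions for illustration (the callback parameters mirror the C callback shown in section 1.3, without the size parameters):

Python
import cloudi

class EchoService(object):
    def __init__(self, api):
        self.api = api
        # subscribe so that requests sent to <prefix> + "echo" are received
        # (with the assumed configured prefix "/examples/",
        #  the full service name pattern is "/examples/echo")
        api.subscribe('echo', self.request)

    # assumed callback parameter list (mirrors the C callback without sizes)
    def request(self, request_type, name, pattern, request_info, request,
                timeout, priority, trans_id, source):
        # the request handler return value is used as the response
        return request

if __name__ == '__main__':
    api = cloudi.API(0)  # thread index 0 (a single configured thread)
    EchoService(api)
    api.poll()           # handle incoming service requests until termination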

Top

1.1 - (initialization)

The service configuration will control the CloudI API initialization, which is done automatically, but does influence the source code. The service configuration defines the number of Operating System (OS) processes to create and the number of threads for an external service. For an internal service, the configuration defines the number of Erlang processes to create. A number specified as an integer in the configuration is the exact number of processes or threads. However, if the number is specified as a floating point number, it is used as a CPU count (i.e., Erlang scheduler count) multiplier where >1.0 implies floor and <1.0 implies round. The external service APIs provide the thread_count function so that the total number of threads can be used for thread creation, with each thread holding an instance of the CloudI API to avoid lock contention (a usage sketch follows the language integration notes below):

C
int cloudi_initialize_thread_count(unsigned int * const thread_count);
int cloudi_initialize(cloudi_instance_t * api,
                      unsigned int const thread_index,
                      void * state);
C++
unsigned int CloudI::API::thread_count();
CloudI::API(unsigned int const thread_index,
            bool const terminate_return_value = true);
Elixir
  def cloudi_service_init(_args, _prefix, _timeout, dispatcher) do
    # ...
    {:ok, :undefined}
  end
Erlang
cloudi_service_init(_Args, _Prefix, _Timeout, Dispatcher) ->
    % ...
    {ok, #state{}}.
Go
cloudi.ThreadCount() (uint32, error)
cloudi.API(threadIndex uint32, state interface{}) (*Instance, error)
Java
int org.cloudi.API.thread_count();
org.cloudi.API(final int thread_index);
JavaScript
CloudI.API.thread_count();
CloudI.API(thread_index, callback);
Perl
CloudI::API->thread_count();
CloudI::API->new($thread_index);
PHP
\CloudI\API::thread_count();
\CloudI\API($thread_index);
Python
cloudi.API.thread_count()
cloudi.API(thread_index)
Ruby
CloudI::API.thread_count()
CloudI::API.new(thread_index)
 
Initialize an instance of the CloudI API

An internal service uses the configured initialization timeout to limit the execution time spent within the cloudi_service_init/4 cloudi_service behaviour callback function. An external service uses the configured initialization timeout to limit the execution time spent between creating an instance of the CloudI API and calling the CloudI API poll function (for the first time). During the service initialization the CloudI API functions that may not be called are: send_sync, recv_async, return and forward (i.e., send_sync and recv_async block to receive a service request response but service initialization must be asynchronous and within the initialization timeout period, return and forward are only valid when completing the handling of a service request).

If the initialization timeout is exceeded a service failure has occurred and a restart will occur (if possible) based on the configured MaxR (maximum restarts) and MaxT (maximum time period in seconds) service configuration values. If the service was configured in the CloudI configuration file (i.e., /usr/local/etc/cloudi/cloudi.conf, used when CloudI is first started) and one of the configured services fails initialization MaxR times, CloudI will shut down to prevent erroneous operation. To avoid the fail-fast handling of the CloudI configuration file, the CloudI Service API services_add function may be used to provide service configuration (the return value will provide information about service initialization failures exceeding MaxR).

The service configuration also allows Access Control Lists (ACLs) to define explicit service name patterns for allowing or denying service destinations when the service sends a service request. The ACLs along with the destination refresh method determine how service requests are sent while other service options can tweak default settings.

External (non-Erlang) services are provided with both the command line arguments and the environment variables specified within the service configuration. External service configuration uses the full path to the executable while internal services use the module name (and the OTP application name) within the code search paths. All environment variables set in the shell executing the Erlang VM can be used within the executable path, arguments and environment set in the configuration of an external service, using standard shell syntax (e.g., "${USER}" or "$USER", where "\$" is a literal "$" character).

Please see the CloudI Service API (services_add) for more details about service configuration.

Specific Language Integration Notes:

The Elixir/Erlang CloudI API functions shown below accept the most function parameters available in cloudi_service, but functions with fewer parameters do exist and they use default values for the timeout and the request priority. Both the Timeout parameter and the Priority parameter accept the 'undefined' atom to assign the default configured value. Please see the cloudi_service module for all the available functions and the behaviour interface functions that are implemented within an Erlang service. The cloudi_service module is used within CloudI services; however, it is also possible to use CloudI services from external Erlang processes with a subset of the CloudI API functions in the cloudi module.

Both the C and the C++ CloudI API rely on the same underlying code, with the C++ API object as a wrapper around the C API pointer, so there should be no large performance difference. The STL is avoided (to avoid the libstdc++ memory pool) and internal memory pools are used instead. The C++ CloudI API functions below use the STRING type to represent either char const * const (or) std::string const &, since both are supported with overloaded functions.

The Java CloudI API doesn't have any C or C++ integration. It only uses reflection to utilize the low-level file descriptor object and store object function pointers.

The Python CloudI API is provided as both the "cloudi" module and the "cloudi_c" module. The "cloudi_c" module uses the C++ CloudI API for better efficiency, while the "cloudi" module only uses Python source code.
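Since each external service thread holds its own CloudI API instance, thread creation typically uses the thread_count function described above. Below is a hedged Python sketch; the use of the standard library threading module is illustrative, not part of the CloudI API:

Python
import threading
import cloudi

def thread_main(thread_index):
    api = cloudi.API(thread_index)  # one CloudI API instance per thread
    # ... subscribe to service name patterns here ...
    api.poll()                      # handle service requests until termination

if __name__ == '__main__':
    # the service configuration determines the total thread count
    threads = [threading.Thread(target=thread_main, args=(i,))
               for i in range(cloudi.API.thread_count())]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()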

Top

1.2 - (termination)

An internal service uses the termination timeout to limit the execution time spent within the cloudi_service_terminate/3 cloudi_service behaviour callback function. An external service uses the termination timeout to limit the execution time spent between returning from the CloudI API poll function and the service OS process exit. The termination timeout is slightly less than MaxT (maximum time period in seconds) divided by MaxR (maximum restarts) to ensure service failures are finite (MaxR and MaxT are both service configuration values). During termination no CloudI API functions may be called. The termination execution time is used to clean up the service's state (e.g., close connections or files).

Top

1.3 - subscribe

C
typedef void (*cloudi_callback_t)(int const request_type,
                                  char const * const name,
                                  char const * const pattern,
                                  void const * const request_info,
                                  uint32_t const request_info_size,
                                  void const * const request,
                                  uint32_t const request_size,
                                  uint32_t timeout,
                                  int8_t priority,
                                  char const * const trans_id,
                                  char const * const source,
                                  uint32_t const source_size,
                                  void * state,
                                  cloudi_instance_t * api);
int cloudi_subscribe(cloudi_instance_t * api,
                     char const * const pattern,
                     cloudi_callback_t f);
C++
template <typename T>
int CloudI::API::subscribe(STRING pattern,
                           T & object,
                           void (T::*f) (CloudI::API const &,
                                         int const,
                                         STRING,
                                         STRING,
                                         void const * const,
                                         uint32_t const,
                                         void const * const,
                                         uint32_t const,
                                         uint32_t,
                                         int8_t,
                                         char const * const,
                                         char const * const,
                                         uint32_t const)) const;
int CloudI::API::subscribe(STRING pattern,
                           void (*f) (API const &,
                                      int const,
                                      STRING,
                                      STRING,
                                      void const * const,
                                      uint32_t const,
                                      void const * const,
                                      uint32_t const,
                                      uint32_t,
                                      int8_t,
                                      char const * const,
                                      char const * const,
                                      uint32_t const)) const
Elixir
:cloudi_service.subscribe(dispatcher, pattern)
Erlang
cloudi_service:subscribe(Dispatcher :: pid(), Pattern :: string()) ->
    ok.
Go
func (api *cloudi.Instance) Subscribe(pattern string,
                                      function cloudi.Callback) error
Java
void org.cloudi.API.subscribe(final String pattern,
                              final Object instance,
                              final String methodName);
// with Java ≥ 8 a method reference can be used
void org.cloudi.API.subscribe(final String pattern,
                              final FunctionInterface9 callback);
JavaScript
var callback = function () {};
CloudI.API.subscribe(pattern, object, object_function, callback);
Perl
CloudI::API->subscribe($pattern, $function);
CloudI::API->subscribe($pattern, $object, $method);
PHP
\CloudI\API::subscribe($pattern, $object, $method);
Python
cloudi.API.subscribe(pattern, function)
Ruby
CloudI::API.subscribe(pattern, function)
 
Subscribe to a service name pattern

Subscribes with a service name pattern which provides a destination for other services to send to. The subscribing service will receive a service request, if a different service sends a service request with a service name that matches the service name pattern. The service name pattern is a string that may contain "*" and "?" wildcard characters, to accept any matching service requests. Either "*" or "?" in a service name pattern will match one or more characters with "?" never matching the character that follows it ("?" is unable to be the last character, i.e., "/?/" matches "/a/" but never "/a/b/" while "/*/" will match either). The service names and service name patterns are expected to be in a filepath format (e.g., "/root/directory/file.extension") by some provided CloudI services, though nothing enforces this convention. Good design dictates that service names operate within a given scope. Both the service names and the service name patterns should represent an appropriate scope, which the service manages (i.e., the same concept as a Uniform Resource Identifier (URI)).

When a service subscribes to a service name pattern, the supplied pattern string is appended to the service name prefix from the service's configuration, to provide the full service name pattern. The prefix provided within the service's configuration declares the scope of all service operations, as they are seen from other running services. Multiple subscribe function calls can increase the probability of receiving a service request when other services are subscribed with the same service name pattern.
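As a hedged illustration (the prefix, names and handler are hypothetical, with api a cloudi.API instance as in the earlier Python sketch), a service configured with the prefix "/db/" that subscribes as below will receive service requests sent to names such as "/db/users/get":

Python
# with the (hypothetical) configured prefix "/db/", this subscription
# creates the full service name pattern "/db/users/*"
api.subscribe('users/*', users_request)

# a request sent to "/db/users/get" matches "/db/users/*"; subscribing again
# increases the probability of receiving matching requests when other
# services are subscribed with the same service name pattern
api.subscribe('users/*', users_request)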

Top

1.4 - subscribe_count

C
int cloudi_subscribe_count(cloudi_instance_t * api,
                           char const * const pattern);
// cloudi_get_subscribe_count(p) to get the result
C++
int CloudI::API::subscribe_count(STRING pattern) const;
// CloudI::API::get_subscribe_count() to get the result
Elixir
:cloudi_service.subscribe_count(dispatcher, pattern)
Erlang
cloudi_service:subscribe_count(Dispatcher :: pid(),
                               Pattern :: string()) ->
    non_neg_integer().
Go
func (api *cloudi.Instance) SubscribeCount(pattern string)
                                          (uint32, error)
Java
int org.cloudi.API.subscribe_count(final String pattern);
// return value is result
JavaScript
var callback = function (count) {};
CloudI.API.subscribe_count(pattern, callback);
Perl
CloudI::API->subscribe_count($pattern);
PHP
\CloudI\API::subscribe_count($pattern);
Python
cloudi.API.subscribe_count(pattern)
# return value is result
Ruby
CloudI::API.subscribe_count(pattern)
# return value is result
 
Determine how many subscriptions have occurred with a service name pattern

Provide a count of how many times a subscription has occurred for a specific service name pattern within the current service process. Often the result is either 0 or 1, but it is possible to subscribe any number of times to change the probability of the service process getting a service request, when many service processes are subscribing with the same service name pattern. This function will always check the authoritative service process registry, so the result is always correct at that point in time (i.e., a lazy destination refresh method will not affect the subscribe_count result).

subscribe_count usage is most common during testing and when first learning about CloudI, but the function may also be used if the subscribe function usage is complex and not tracked separately within the service process' internal state.
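A brief hedged continuation of the hypothetical Python sketch above:

Python
# counts subscriptions by this service process for the exact pattern string
count = api.subscribe_count('users/*')  # 2 after the two subscribe calls above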

Top

1.5 - unsubscribe

C
int cloudi_unsubscribe(cloudi_instance_t * api,
                       char const * const pattern);
C++
int CloudI::API::unsubscribe(STRING pattern) const;
Elixir
:cloudi_service.unsubscribe(dispatcher, pattern)
Erlang
cloudi_service:unsubscribe(Dispatcher :: pid(), Pattern :: string()) ->
    ok.
Go
func (api *cloudi.Instance) Unsubscribe(pattern string) error
Java
void org.cloudi.API.unsubscribe(final String pattern);
JavaScript
var callback = function () {};
CloudI.API.unsubscribe(pattern, callback);
Perl
CloudI::API->unsubscribe($pattern);
PHP
\CloudI\API::unsubscribe($pattern);
Python
cloudi.API.unsubscribe(pattern)
Ruby
CloudI::API.unsubscribe(pattern)
 
Unsubscribe from a service name pattern

Unsubscribe will remove the service's subscription for the specific service name pattern. If a service has subscribed with the same service name pattern multiple times, the unsubscribe will only remove one subscription instance. The subscription instance which is removed is whatever subscription would have been called next, for a matching service request.
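A brief hedged continuation of the hypothetical Python sketch from section 1.3:

Python
# removes a single subscription instance for the pattern;
# after the two earlier subscribe calls, one subscription for "users/*" remains
api.unsubscribe('users/*')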

Top

1.6 - get_pid (internal services only)

Elixir
:cloudi_service.get_pid(dispatcher, name, timeout)
Erlang
cloudi_service:get_pid(Dispatcher :: pid(),
                       Name :: string(),
                       Timeout :: non_neg_integer() | 'undefined' |
                                  'limit_min' | 'limit_max') ->
    {'ok', PatternPid :: {string(), pid()}} |
    {'error', Reason :: atom()}.
 
Get a service process identifier for a service whose service name pattern subscription matches the provided service name

Internal (Elixir/Erlang-only) services can request an Erlang process based on the service name provided, before calling either the send_sync function or the send_async function. The get_pid function should rarely be necessary, but it can allow other logic to be used for determining which service should receive a request (e.g., based on apparent processing power, like within the hexpi test). The Erlang PatternPid tuple returned could become invalid if the service destination terminated, so the Erlang process monitoring becomes the burden of the get_pid function user. Due to the intimate nature of this function, it only exists within the Elixir/Erlang CloudI API (to implement it in other languages would cause service destination inconsistencies due to the function delay and the potential storage before the destination is used).

The get_pid function provides a way to split the service name lookup latency from the service request latency so that two separate timeout values can be used, instead of a single timeout.

Top

1.7 - get_pids (internal services only)

Elixir
:cloudi_service.get_pids(dispatcher, name, timeout)
Erlang
cloudi_service:get_pids(Dispatcher :: pid(),
                        Name :: string(),
                        Timeout :: non_neg_integer() | 'undefined' |
                                   'limit_min' | 'limit_max') ->
    {'ok', PatternPids :: nonempty_list({string(), pid()})} |
    {'error', Reason :: atom()}.
 
Get all service process identifiers for services whose service name pattern subscriptions match the provided service name

Internal (Elixir/Erlang-only) services can request a list of Erlang processes based on the service name provided, before calling either the send_sync function or the send_async function. If all Erlang processes returned need to be used with send_async, it is easier to use the mcast_async function. The get_pids function should rarely be necessary, but it can allow other logic to be used for determining which service should receive a request (e.g., based on apparent processing power, like within the hexpi test). The Erlang PatternPids tuple list returned could contain invalid Erlang processes if the service destination terminated, so the Erlang process monitoring becomes the burden of the get_pids function user. Due to the intimate nature of this function, it only exists within the Elixir/Erlang CloudI API (to implement it in other languages would cause service destination inconsistencies due to the function delay and the potential storage before the destination is used).

The get_pids function provides a way to split the service name lookup latency from the service request latency so that two separate timeout values can be used, instead of a single timeout (e.g., with mcast_async).

Top

1.8 - send_sync

C
int cloudi_send_sync_(cloudi_instance_t * api,
                      char const * const name,
                      void const * const request_info,
                      uint32_t const request_info_size,
                      void const * const request,
                      uint32_t const request_size,
                      uint32_t timeout,
                      int8_t const priority);
C++
int CloudI::API::send_sync(STRING name,
                           void const * const request_info,
                           uint32_t const request_info_size,
                           void const * const request,
                           uint32_t const request_size,
                           uint32_t timeout,
                           int8_t const priority) const;
Elixir
:cloudi_service.send_sync(dispatcher, name,
                          request_info, request, timeout, priority)
:cloudi_service.send_sync(dispatcher, name,
                          request_info, request, timeout, priority,
                          pattern_pid)
Erlang
cloudi_service:send_sync(Dispatcher :: pid(),
                         Name :: string(),
                         RequestInfo :: any(),
                         Request :: any(),
                         Timeout :: non_neg_integer() | 'undefined' |
                                    'limit_min' | 'limit_max',
                         Priority :: integer() | 'undefined') ->
    {'ok', ResponseInfo :: any(), Response :: any()} |
    {'ok', Response :: any()} |
    {'error', Reason :: atom()}.
cloudi_service:send_sync(Dispatcher :: pid(),
                         Name :: string(),
                         RequestInfo :: any(),
                         Request :: any(),
                         Timeout :: non_neg_integer() | 'undefined' |
                                    'limit_min' | 'limit_max',
                         Priority :: integer() | 'undefined',
                         PatternPid :: {string(), pid()}) ->
    {'ok', ResponseInfo :: any(), Response :: any()} |
    {'ok', Response :: any()} |
    {'error', Reason :: atom()}.
Go
func (api *cloudi.Instance) SendSync(name string,
                                     requestInfo, request []byte,
                                     timeoutPriority ...interface{})
                                    ([]byte, []byte, []byte, error)
Java
Response org.cloudi.API.send_sync(String name, byte[] request_info,
                                  byte[] request, Integer timeout,
                                  Byte priority);
JavaScript
CloudI.API.send_sync(name, request, callback,
                     timeout, request_info, priority);
Perl
CloudI::API->send_sync($name, $request,
                       $timeout, $request_info, $priority);
PHP
\CloudI\API::send_sync($name, $request,
                       $timeout = null, $request_info = null,
                       $priority = null);
Python
cloudi.API.send_sync(name, request,
                     timeout=None, request_info=None, priority=None)
Ruby
CloudI::API.send_sync(name, request,
                      timeout=nil, request_info=nil, priority=nil)
 
Send a synchronous service request to a single service process whose service name pattern subscription matches the provided service name

Send a synchronous request to a service name with a specific timeout and a specific priority. If a timeout is not provided, the default synchronous timeout from the service configuration is used. If a priority is not provided, the default priority from the service configuration options is used (normally the default priority is 0).

C

Separate functions are provided to get the function result after a successful send_sync function call (an integer 0 return value).

cloudi_get_response(p)
cloudi_get_response_size(p)
cloudi_get_response_info(p)
cloudi_get_response_info_size(p)
cloudi_get_trans_id_count(p)
cloudi_get_trans_id(p, i)
C++

Separate functions are provided to get the function result after a successful send_sync function call (an integer 0 return value).

char const * CloudI::API::get_response() const;
uint32_t CloudI::API::get_response_size() const;
char const * CloudI::API::get_response_info() const;
uint32_t CloudI::API::get_response_info_size() const;
uint32_t CloudI::API::get_trans_id_count() const;
char const * CloudI::API::get_trans_id(unsigned int const i = 0) const;
Elixir

response_info is only returned if it does not equal "". response is only returned if it does not equal "".

{:ok, response_info, response}
{:ok, response}
{:error, reason}
Erlang

ResponseInfo is only returned if it does not equal <<>>. Response is only returned if it does not equal <<>>.

{'ok', ResponseInfo :: any(), Response :: any()}
{'ok', Response :: any()}
{'error', Reason :: atom()}
Go

responseInfo is []byte (nil if an error occurred).
response is []byte (nil if an error occurred).
transId is []byte (nil if an error occurred).
err is error (nil if successful).

(responseInfo, response, transId, err)
Java

A class encapsulates the function result.

org.cloudi.API.Response
JavaScript

The callback provides the function result.

callback(response_info, response, trans_id);
Perl

An array provides the function result.

($response_info, $response, $trans_id)
PHP

An array provides the function result.

array($response_info, $response, $trans_id)
Python

A tuple provides the function result.

(response_info, response, trans_id)
Ruby

An array provides the function result.

[response_info, response, trans_id]
 
send_sync return value

The send_sync response data is provided in ways typical to each programming language, as shown above. The non-Erlang send_sync functions provide the TransId of the request because the calling service may need to use the v1 UUID to manipulate and/or store the response.
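A hedged Python sketch (the service name, request content and timeout are hypothetical; api is a cloudi.API instance and the call is made from within a service request handler, since send_sync may not be called during initialization):

Python
# timeout is in milliseconds; omitting it uses the configured default
(response_info, response,
 trans_id) = api.send_sync('/db/users/get', b'user_id=42', timeout=5000)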

Top

1.9 - send_async

C
int cloudi_send_async_(cloudi_instance_t * api,
                       char const * const name,
                       void const * const request_info,
                       uint32_t const request_info_size,
                       void const * const request,
                       uint32_t const request_size,
                       uint32_t timeout,
                       int8_t const priority);
C++
int CloudI::API::send_async(STRING name,
                            void const * const request_info,
                            uint32_t const request_info_size,
                            void const * const request,
                            uint32_t const request_size,
                            uint32_t timeout,
                            int8_t const priority) const;
Elixir
:cloudi_service.send_async(dispatcher, name,
                           request_info, request, timeout, priority)
:cloudi_service.send_async(dispatcher, name,
                           request_info, request, timeout, priority,
                           pattern_pid)
Erlang
cloudi_service:send_async(Dispatcher :: pid(),
                          Name :: string(),
                          RequestInfo :: any(),
                          Request :: any(),
                          Timeout :: non_neg_integer() | 'undefined' |
                                    'limit_min' | 'limit_max',
                          Priority :: integer() | 'undefined') ->
    {'ok', TransId :: <<_:128>>} |
    {'error', Reason :: atom()}.
cloudi_service:send_async(Dispatcher :: pid(),
                          Name :: string(),
                          RequestInfo :: any(),
                          Request :: any(),
                          Timeout :: non_neg_integer() | 'undefined' |
                                    'limit_min' | 'limit_max',
                          Priority :: integer() | 'undefined',
                          PatternPid :: {string(), pid()}) ->
    {'ok', TransId :: <<_:128>>} |
    {'error', Reason :: atom()}.
Go
func (api *cloudi.Instance) SendAsync(name string,
                                      requestInfo, request []byte,
                                      timeoutPriority ...interface{})
                                     ([]byte, error)
Java
TransId org.cloudi.API.send_async(String name, byte[] request_info,
                                  byte[] request, Integer timeout,
                                  Byte priority);
JavaScript
CloudI.API.send_async(name, request, callback,
                      timeout, request_info, priority);
Perl
CloudI::API->send_async($name, $request,
                        $timeout, $request_info, $priority);
PHP
\CloudI\API::send_async($name, $request,
                        $timeout = null, $request_info = null,
                        $priority = null);
Python
cloudi.API.send_async(name, request,
                      timeout=None, request_info=None, priority=None)
Ruby
CloudI::API.send_async(name, request,
                       timeout=nil, request_info=nil, priority=nil)
 
Send an asynchronous service request to a single service process whose service name pattern subscription matches the provided service name

Send an asynchronous request to a service name with a specific timeout and a specific priority. If a timeout is not provided, the default asynchronous timeout from the service configuration is used. If a priority is not provided, the default priority from the service configuration options is used (normally the default priority is 0).

An asynchronous send will block until a live service matches the service name destination or the timeout expires (when the service configuration option request_name_lookup is set to 'sync', the default; if an asynchronous lookup is required, set request_name_lookup to 'async'). Once the asynchronous request is sent, the TransId which identifies the request is returned.

C

Separate functions are provided to get the function result after a successful send_async function call (an integer 0 return value).

cloudi_get_trans_id_count(p)
cloudi_get_trans_id(p, i)
C++

Separate functions are provided to get the function result after a successful send_async function call (an integer 0 return value).

uint32_t CloudI::API::get_trans_id_count() const;
char const * CloudI::API::get_trans_id(unsigned int const i = 0) const;
Elixir
{:ok, trans_id}
{:error, reason}
Erlang
{'ok', TransId :: <<_:128>>}
{'error', Reason :: atom()}
Go

transId is []byte (nil if an error occurred).
err is error (nil if successful).

(transId, err)
Java

A class encapsulates the function result.

org.cloudi.API.TransId
JavaScript

The callback provides the function result, the trans_id as a string of 16 bytes.

callback(trans_id);
Perl

The trans_id is a string of 16 bytes.

$trans_id
PHP

The trans_id is a string of 16 bytes.

$trans_id
Python

The trans_id is a string of 16 bytes.

trans_id
Ruby

The trans_id is a string of 16 bytes.

trans_id
 
send_async return value

The send_async result is provided in ways typical to each programming language, as shown above. A TransId is a v1 UUID.
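A hedged Python sketch (the service name and request are hypothetical); the response, if any, is retrieved later with recv_async (section 1.13):

Python
# returns as soon as the request has been sent;
# the trans_id (v1 UUID) identifies the request
trans_id = api.send_async('/db/users/get', b'user_id=42')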

Top

1.10 - send_async_active (internal services only)

Elixir
:cloudi_service.send_async_active(dispatcher, name,
                                  request_info, request,
                                  timeout, priority)
:cloudi_service.send_async_active(dispatcher, name,
                                  request_info, request,
                                  timeout, priority,
                                  pattern_pid)
Erlang
cloudi_service:send_async_active(Dispatcher :: pid(),
                                 Name :: string(),
                                 RequestInfo :: any(),
                                 Request :: any(),
                                 Timeout :: non_neg_integer() |
                                            'undefined' |
                                            'limit_min' | 'limit_max',
                                 Priority :: integer() | 'undefined') ->
    {'ok', TransId :: <<_:128>>} |
    {'error', atom()}.
cloudi_service:send_async_active(Dispatcher :: pid(),
                                 Name :: string(),
                                 RequestInfo :: any(),
                                 Request :: any(),
                                 Timeout :: non_neg_integer() |
                                            'undefined' |
                                            'limit_min' | 'limit_max',
                                 Priority :: integer() | 'undefined',
                                 PatternPid :: {string(), pid()}) ->
    {'ok', TransId :: <<_:128>>} |
    {'error', atom()}.
 
Send an asynchronous service request to a single service process whose service name pattern subscription matches the provided service name with the result as an Erlang message

The send_async_active function provides the same functionality as the send_async function within an Erlang process, but the response is automatically sent to the Erlang process, after completion. Using send_async_active is the preferred way to send an asynchronous service request in Erlang because it utilizes Erlang's concurrency without requiring a blocking operation (a passive send, using Erlang vernacular, since it would otherwise require a call of the function recv_async to receive the request). The send_async_active function is not implemented in other languages because of their lack of native event handling.

Elixir
{:return_async_active, name, pattern,
 response_info, response, timeout, trans_id}
{:timeout_async_active, trans_id}
Erlang
{'return_async_active', Name :: string(), Pattern :: string(),
 ResponseInfo :: any(), Response :: any(),
 Timeout :: non_neg_integer(), TransId :: <<_:128>>}
{'timeout_async_active', TransId :: <<_:128>>}
 
send_async_active incoming Erlang process message

The send_async_active message is sent to the Erlang process as an Erlang message, so it arrives in the cloudi_service_handle_info function of the Erlang service module (i.e., the module that implements the cloudi_service behavior). The message formats are also provided as records that are accessible with:

-include_lib("cloudi_core/include/cloudi_service.hrl").
Top

1.11 - mcast_async

C
int cloudi_mcast_async_(cloudi_instance_t * api,
                        char const * const name,
                        void const * const request_info,
                        uint32_t const request_info_size,
                        void const * const request,
                        uint32_t const request_size,
                        uint32_t timeout,
                        int8_t const priority);
C++
int CloudI::API::mcast_async(STRING name,
                             void const * const request_info,
                             uint32_t const request_info_size,
                             void const * const request,
                             uint32_t const request_size,
                             uint32_t timeout,
                             int8_t const priority) const;
Elixir
:cloudi_service.mcast_async(dispatcher, name,
                            request_info, request, timeout, priority)
Erlang
cloudi_service:mcast_async(Dispatcher :: pid(),
                           Name :: string(),
                           RequestInfo :: any(),
                           Request :: any(),
                           Timeout :: non_neg_integer() | 'undefined' |
                                      'limit_min' | 'limit_max',
                           Priority :: integer() | 'undefined') ->
    {'ok', TransIdList :: list(<<_:128>>)} |
    {'error', Reason :: atom()}.
Go
func (api *cloudi.Instance) McastAsync(name string,
                                       requestInfo, request []byte,
                                       timeoutPriority ...interface{})
                                      ([][]byte, error)
Java
List<TransId> org.cloudi.API.mcast_async(String name,
                                         byte[] request_info,
                                         byte[] request,
                                         Integer timeout,
                                         Byte priority);
JavaScript
CloudI.API.mcast_async(name, request, callback,
                       timeout, request_info, priority);
Perl
CloudI::API->mcast_async($name, $request,
                         $timeout, $request_info, $priority);
PHP
\CloudI\API::mcast_async($name, $request,
                         $timeout = null, $request_info = null,
                         $priority = null);
Python
cloudi.API.mcast_async(name, request,
                       timeout=None, request_info=None, priority=None)
Ruby
CloudI::API.mcast_async(name, request,
                        timeout=nil, request_info=nil, priority=nil)
 
Send an asynchronous service request to all service processes whose service name pattern subscriptions match the provided service name

Multicast asynchronously, which is the same as publish, except that it is possible to respond to the service request. The mcast_async function will send the service request asynchronously to all services that have subscribed to a service name pattern that matches the service name destination. The mcast_async function will block until at least a single request has been sent or the timeout has expired (when the service configuration option request_name_lookup is set to 'sync', the default; if an asynchronous lookup is required, set request_name_lookup to 'async'). The result of the function call is a list of TransIds (one TransId per service request). If a publish request is required, the destination service should return a null response, so that the service request response is ignored.

C

Separate functions are provided to get the function result after a successful mcast_async function call (an integer 0 return value).

cloudi_get_trans_id_count(p)
cloudi_get_trans_id(p, i)
C++

Separate functions are provided to get the function result after a successful mcast_async function call (an integer 0 return value).

uint32_t CloudI::API::get_trans_id_count() const;
char const * CloudI::API::get_trans_id(unsigned int const i = 0) const;
Elixir
{:ok, trans_id_list}
{:error, reason}
Erlang
{'ok', TransIdList :: list(<<_:128>>)}
{'error', Reason :: atom()}
Go

transIds is [][]byte (nil if an error occurred).
err is error (nil if successful).

(transIds, err)
Java

A class encapsulates the function result.

List<org.cloudi.API.TransId>
JavaScript

The callback provides the function result, an array of trans_id strings (each trans_id is a string of 16 bytes).

callback(trans_ids);
Perl

An array of trans_id strings (each trans_id is a string of 16 bytes).

@trans_ids
PHP

An array of trans_id strings (each trans_id is a string of 16 bytes).

$trans_ids
Python

An array of trans_id strings (each trans_id is a string of 16 bytes).

[trans_id]
Ruby

An array of trans_id strings (each trans_id is a string of 16 bytes).

[trans_id]
 
mcast_async return value

The mcast_async result is provided in ways typical to each programming language, as shown above. A TransId is a v1 UUID.
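A hedged Python sketch (the service name and request are hypothetical; api is a cloudi.API instance):

Python
# one service request is sent to every subscriber whose pattern matches;
# the result is a list with one trans_id (v1 UUID) per request sent
trans_ids = api.mcast_async('/events/user_created', b'user_id=42')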

Top

1.12 - mcast_async_active (internal services only)

Elixir
:cloudi_service.mcast_async_active(dispatcher, name,
                                   request_info, request,
                                   timeout, priority)
Erlang
cloudi_service:mcast_async_active(Dispatcher :: pid(),
                                  Name :: string(),
                                  RequestInfo :: any(),
                                  Request :: any(),
                                  Timeout :: non_neg_integer() |
                                             'undefined' |
                                             'limit_min' | 'limit_max',
                                  Priority :: integer() |
                                              'undefined') ->
    {'ok', TransIdList :: list(<<_:128>>)} |
    {'error', Reason :: atom()}.
 
Send an asynchronous service request to all service processes whose service name pattern subscriptions match the provided service name with the result as an Erlang message

The mcast_async_active function provides the same functionality as the mcast_async function within an Erlang process, but the response is automatically sent to the Erlang process, after completion. Using mcast_async_active is the preferred way to publish an asynchronous service request in Erlang because it utilizes Erlang's concurrency without requiring a blocking operation (a passive send, using Erlang vernacular, since it would otherwise require a call of the function recv_async to receive the request). The mcast_async_active function is not implemented in other languages because of their lack of native event handling.

Elixir
{:return_async_active, name, pattern,
 response_info, response, timeout, trans_id}
{:timeout_async_active, trans_id}
Erlang
{'return_async_active', Name :: string(), Pattern :: string(),
 ResponseInfo :: any(), Response :: any(),
 Timeout :: non_neg_integer(), TransId :: <<_:128>>}
{'timeout_async_active', TransId :: <<_:128>>}
 
mcast_async_active incoming Erlang process message (the same as send_async_active messages)

The mcast_async_active message is sent to the Erlang process as an Erlang message, so it arrives in the cloudi_service_handle_info function of the Erlang service module (i.e., the module that implements the cloudi_service behavior). The message formats are also provided as records that are accessible with:

-include_lib("cloudi_core/include/cloudi_service.hrl").
Top

1.13 - recv_async

C
int cloudi_recv_async(cloudi_instance_t * api,
                      uint32_t timeout,
                      char const * const trans_id,
                      int consume);
C++
int CloudI::API::recv_async(uint32_t timeout,
                            STRING trans_id,
                            bool consume) const;
Elixir
:cloudi_service.recv_async(dispatcher, timeout, trans_id, consume)
Erlang
cloudi_service:recv_async(Dispatcher :: pid(),
                          Timeout :: non_neg_integer() | 'undefined' |
                                     'limit_min' | 'limit_max',
                          TransId :: <<_:128>>,
                          Consume :: boolean()) ->
    {'ok', ResponseInfo :: any(), Response :: any(),
           TransId :: <<_:128>>} |
    {'error', Reason :: atom()}.
Go
func (api *cloudi.Instance) RecvAsync(extra ...interface{})
                                     ([]byte, []byte, []byte, error)
Java
Response org.cloudi.API.recv_async(Integer timeout, byte[] transId,
                                   boolean consume);
JavaScript
CloudI.API.recv_async(callback,
                      timeout, trans_id, consume);
Perl
CloudI::API->recv_async($timeout, $trans_id, $consume);
PHP
\CloudI\API::recv_async($timeout = null, $trans_id = null,
                        $consume = true);
Python
cloudi.API.recv_async(timeout=None, trans_id=None, consume=True)
Ruby
CloudI::API.recv_async(timeout=nil, trans_id=nil, consume=true)
 
Receive an asynchronous service request response

Receive an asynchronous service request's response. If a TransId is not provided, a null UUID is used to request the oldest response that has not timed out. By default, the recv_async function will consume the service request so it is not accessible with the same function call in the future. The TransId of the service request is always returned for any external use or tracking of the request or response.

C

Separate functions are provided to get the function result after a successful recv_async function call (an integer 0 return value).

cloudi_get_response(p)
cloudi_get_response_size(p)
cloudi_get_response_info(p)
cloudi_get_response_info_size(p)
cloudi_get_trans_id_count(p)
cloudi_get_trans_id(p, i)
C++

Separate functions are provided to get the function result after a successful recv_async function call (an integer 0 return value).

char const * CloudI::API::get_response() const;
uint32_t CloudI::API::get_response_size() const;
char const * CloudI::API::get_response_info() const;
uint32_t CloudI::API::get_response_info_size() const;
uint32_t CloudI::API::get_trans_id_count() const;
char const * CloudI::API::get_trans_id(unsigned int const i = 0) const;
Elixir

response_info and response are only returned if both do not equal "".

{:ok, response_info, response, trans_id}
{:error, reason}
Erlang

ResponseInfo and Response are only returned if both do not equal <<>>.

{'ok', ResponseInfo :: any(), Response :: any(),
       TransId :: <<_:128>>}
{'error', Reason :: atom()}
Go

responseInfo is []byte (nil if an error occurred).
response is []byte (nil if an error occurred).
transId is []byte (nil if an error occurred).
err is error (nil if successful).

(responseInfo, response, transId, err)
Java

A class encapsulates the function result.

org.cloudi.API.Response
JavaScript

The callback provides the function result.

callback(response_info, response, trans_id);
Perl

An array provides the function result.

($response_info, $response, $trans_id)
PHP

An array provides the function result.

array($response_info, $response, $trans_id)
Python

A tuple provides the function result.

(response_info, response, trans_id)
Ruby

An array provides the function result.

[response_info, response, trans_id]
 
recv_async return value
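A hedged Python sketch pairing send_async with recv_async (the service name and request are hypothetical; the calls are made from within a service request handler, since recv_async may not be called during initialization):

Python
trans_id_sent = api.send_async('/db/users/get', b'user_id=42')
# ... other work while the request is handled elsewhere ...
# receive that specific response; omitting trans_id would use a null UUID
# to receive the oldest response instead
(response_info, response,
 trans_id) = api.recv_async(trans_id=trans_id_sent)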
Top

1.14 - recv_asyncs (internal services only)

Elixir
:cloudi_service.recv_asyncs(dispatcher,
                            timeout, trans_id_list, consume)
Erlang
cloudi_service:recv_asyncs(Dispatcher :: pid(),
                           Timeout :: non_neg_integer() | 'undefined' |
                                      'limit_min' | 'limit_max',
                           TransIdList :: list(<<_:128>>),
                           Consume :: boolean()) ->
    {'ok', list({ResponseInfo :: any(), Response :: any(),
                 TransId :: <<_:128>>})} |
    {'error', Reason :: atom()}.
 
Receive many asynchronous service request responses

Internal (Elixir/Erlang-only) services can block to receive multiple asynchronous service request responses. By default, the recv_asyncs function will consume the service request so it is not accessible with the same function call in the future. The TransId of the service request is always returned for any external use or tracking of the request or response. The recv_asyncs function is not implemented in other languages to avoid unbounded memory consumption and caching/heap allocation impossibilities.

Top

1.15 - return

C
int cloudi_return(cloudi_instance_t * api,
                  int const request_type,
                  char const * const name,
                  char const * const pattern,
                  void const * const response_info,
                  uint32_t const response_info_size,
                  void const * const response,
                  uint32_t const response_size,
                  uint32_t timeout,
                  char const * const trans_id,
                  char const * const source,
                  uint32_t const source_size);
C++
int CloudI::API::return_(int const request_type,
                         STRING name,
                         STRING pattern,
                         void const * const response_info,
                         uint32_t const response_info_size,
                         void const * const response,
                         uint32_t const response_size,
                         uint32_t timeout,
                         char const * const trans_id,
                         char const * const source,
                         uint32_t const source_size) const;
Elixir
:cloudi_service.return(dispatcher, request_type, name, pattern,
                       response_info, response,
                       timeout, trans_id, source)
Erlang
cloudi_service:return(Dispatcher :: pid(),
                      RequestType :: 'send_async' | 'send_sync',
                      Name :: string(),
                      Pattern :: string(),
                      ResponseInfo :: any(),
                      Response :: any(),
                      Timeout :: non_neg_integer(),
                      TransId :: <<_:128>>,
                      Source :: pid()) ->
    none().
Go
func (api *cloudi.Instance) Return(requestType int,
                                   name, pattern string,
                                   responseInfo, response []byte,
                                   timeout uint32,
                                   transId [16]byte,
                                   source cloudi.Source)
Java
void org.cloudi.API.return_(Integer request_type,
                            String name, String pattern,
                            byte[] response_info, byte[] response,
                            Integer timeout, byte[] transId,
                            OtpErlangPid source);
JavaScript
CloudI.API.return_(request_type, name, pattern,
                   response_info, response, timeout, trans_id, source);
Perl
CloudI::API->return_($request_type, $name, $pattern,
                     $response_info, $response,
                     $timeout, $trans_id, $source);
PHP
\CloudI\API::return_($request_type, $name, $pattern,
                     $response_info, $response,
                     $timeout, $trans_id, $source);
Python
cloudi.API.return_(request_type, name, pattern, response_info, response,
                   timeout, trans_id, source)
Ruby
CloudI::API.return_(request_type, name, pattern, response_info, response,
                    timeout, trans_id, source)
 
Return a received service request response

Return a response to a service request. The return function will throw a caught exception so that the request handler execution is aborted after returning the service request response. The simplest and preferred way to return a response within an Erlang service is to utilize the cloudi_service_handle_request function return values used by the cloudi_service behavior. You can also utilize the request handler return value for the response in the programming languages Java, Python, and Ruby. However, within the external services it is more explicit (i.e., easier to understand the source code) when the source code uses the return functions.

If the service is configured with the request_timeout_adjustment option set to true (the default is false), the request handler execution time will automatically decrement the request timeout, after the request has been handled. If the service is configured with the response_timeout_adjustment option set to true (the default is false), the response timeout is automatically decremented based on the sender-side's timing (more accurate).
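A hedged Python sketch of an explicit return from within a request handler (the handler parameter list is the same assumption used in the earlier sketches; returning the response as the handler's return value, as noted above, is usually simpler):

Python
def users_request(self, request_type, name, pattern, request_info, request,
                  timeout, priority, trans_id, source):
    response = b'user_name=alice'  # hypothetical response data
    # return_ raises an exception that the CloudI API catches,
    # so no code after this call executes
    self.api.return_(request_type, name, pattern,
                     b'', response, timeout, trans_id, source)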

Top

1.16 - forward

C
int cloudi_forward(cloudi_instance_t * api,
                   int const request_type,
                   char const * const name,
                   void const * const request_info,
                   uint32_t const request_info_size,
                   void const * const request,
                   uint32_t const request_size,
                   uint32_t timeout,
                   int8_t const priority,
                   char const * const trans_id,
                   char const * const source,
                   uint32_t const source_size);
C++
int CloudI::API::forward_(int const request_type,
                          STRING name,
                          void const * const request_info,
                          uint32_t const request_info_size,
                          void const * const request,
                          uint32_t const request_size,
                          uint32_t timeout,
                          int8_t const priority,
                          char const * const trans_id,
                          char const * const source,
                          uint32_t const source_size) const;
Elixir
:cloudi_service.forward(dispatcher, request_type, name,
                        request_info, request,
                        timeout, priority, trans_id, source)
Erlang
cloudi_service:forward(Dispatcher :: pid(),
                       RequestType :: 'send_async' | 'send_sync',
                       Name :: string(),
                       RequestInfo :: any(),
                       Request :: any(),
                       Timeout :: non_neg_integer(),
                       Priority :: integer(),
                       TransId :: <<_:128>>,
                       Source :: pid()) ->
    none().
Go
func (api *cloudi.Instance) Forward(requestType int,
                                    name string,
                                    requestInfo, request []byte,
                                    timeout uint32,
                                    priority int8,
                                    transId [16]byte,
                                    source cloudi.Source)
Java
Response org.cloudi.API.forward_(Integer request_type, String name,
                                 byte[] request_info, byte[] request,
                                 Integer timeout, Byte priority,
                                 byte[] transId, OtpErlangPid source);
JavaScript
CloudI.API.forward_(request_type, name, request_info, request,
                    timeout, priority, trans_id, source);
Perl
CloudI::API->forward_($request_type, $name, $request_info, $request,
                      $timeout, $priority, $trans_id, $source);
PHP
\CloudI\API::forward_($request_type, $name, $request_info, $request,
                      $timeout, $priority, $trans_id, $source);
Python
cloudi.API.forward_(request_type, name, request_info, request,
                    timeout, priority, trans_id, source)
Ruby
CloudI::API.forward_(request_type, name, request_info, request,
                     timeout, priority, trans_id, source)
 
Forward a received service request to a single service process whose service name pattern subscription matches the provided service name

Forward the service request to a different destination, possibly with different parameters (e.g., a completely different request). The forward function will throw a caught exception so that the request handler execution is aborted after forwarding the service request. The simplest and preferred way to forward a request within an Erlang service is to utilize the cloudi_service_handle_request function return values used by the cloudi_service behavior. All external services must use a forward function when forwarding a request.

If the service is configured with the request_timeout_adjustment option set to true (the default is false), the request handler execution time will automatically decrement the request timeout, after the request has been handled. If the service is configured with the response_timeout_adjustment option set to true (the default is false), the response timeout is automatically decremented based on the sender-side's timing (more accurate).

Top

1.17 - poll (external services only)

C
int cloudi_poll(cloudi_instance_t * api,
                int timeout);
C++
int CloudI::API::poll(int timeout = -1);
Go
func (api *cloudi.Instance) Poll(timeout int32) (bool, error)
Java
boolean org.cloudi.API.poll(int timeout);
JavaScript
var callback = function (timeout_) {};
CloudI.API.poll(callback, timeout);
Perl
CloudI::API->poll($timeout);
PHP
\CloudI\API::poll($timeout = -1);
Python
cloudi.API.poll(timeout=-1)
Ruby
CloudI::API.poll(timeout=nil)
 
Handle incoming service requests

External services use the poll function to accept service requests while blocking execution until either the timeout value expires or the service terminates. The execution time before the first poll function call is service initialization. The timeout value is specified in milliseconds. A timeout value of 0 can be used to avoid blocking on the poll function call. If the timeout value is not provided, a value of -1 is used to make the poll function call block until service termination (if the programming language allows default function arguments or function overloading). A boolean true value is used as the poll function return value if a timeout occurred (a non-zero return value if the return value is an integer). A boolean false value is used as the poll function return value if service termination is in progress.

Top

1.18 - shutdown

C
int cloudi_shutdown(cloudi_instance_t * api,
                    char const * const reason);
C++
int CloudI::API::shutdown() const;
int CloudI::API::shutdown(STRING reason) const;
Elixir
# cloudi_service behaviour return values:
{:stop, :shutdown}
{:stop, {:shutdown, 'reason'}}
{:stop, :shutdown, state}
{:stop, {:shutdown, 'reason'}, state}
Erlang
% cloudi_service behaviour return values:
{stop, shutdown}
{stop, {shutdown, "reason"}}
{stop, shutdown, State}
{stop, {shutdown, "reason"}, State}
Go
func (api *cloudi.Instance) Shutdown(extra ...interface{}) error
Java
void org.cloudi.API.shutdown();
void org.cloudi.API.shutdown(String reason);
JavaScript
CloudI.API.shutdown(callback, reason);
Perl
CloudI::API->shutdown($reason);
PHP
\CloudI\API::shutdown($reason = null);
Python
cloudi.API.shutdown(reason=None)
Ruby
CloudI::API.shutdown(reason=nil)
 
Cause the service instance to have a successful shutdown

The shutdown functionality provides a way of successfully stopping the service instance when no error has occurred. All service processes are stopped and the service is removed without causing any service process restarts. An optional reason string may be provided to log the reason for the service shutdown.

Top

CloudI Service API - Controlling CloudI

2.0 - Introduction

When CloudI is first started, the configuration file at /usr/local/etc/cloudi/cloudi.conf is used to determine what Access Control Lists (ACLs) should be used for services, what services should be started, what nodes should be connected, and what logging should occur. All the configuration functionality for CloudI can be done dynamically, after startup, with the CloudI Service API. A typical way to use the Service API is with either Erlang terms or JSON-RPC over HTTP (using cloudi_service_api_requests and cloudi_service_http_cowboy). The CloudI Service API can also be accessed directly within the Erlang VM by using the cloudi_service_api module.

Protocol Example
Erlang

curl http://localhost:6464/cloudi/api/rpc/services.erl

JSON-RPC

curl -X POST -d '{"method": "services", "params":[], "id": 1}' http://localhost:6464/cloudi/api/rpc.json

The data returned in both examples is Erlang terms within a string. All of the examples below use the Erlang protocol.
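
As a hedged sketch, the same information could also be retrieved directly within the Erlang VM (assuming cloudi_service_api:services/1 mirrors the services section below and accepts a timeout argument):

% direct Erlang VM access instead of HTTP (timeout argument form assumed)
ServicesResult = cloudi_service_api:services(infinity).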

Top

2.1 - acl_add

curl -X POST -d '[{sensitive, ["/accounting/*", "/finance/*"]}]' http://localhost:6464/cloudi/api/rpc/acl_add.erl

Add more ACL entries to be later used when starting services. An ACL entry is an Erlang atom() -> list(atom() | string()) relationship which provides a logical grouping of service name patterns (e.g., {api, ["/cloudi/api/*"]}).
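
For example (a sketch using the services_add proplist format described below, with hypothetical prefix and file_path values), the ACL atom added above could then be referenced by a service configuration:

% deny sending to any service name matched by the 'sensitive' ACL entry
[{prefix, "/example/"},
 {file_path, "/path/to/service_executable"},
 {dest_list_deny, [sensitive]}]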

Top

2.2 - acl_remove

curl -X POST -d '[sensitive]' http://localhost:6464/cloudi/api/rpc/acl_remove.erl

Remove ACL entries that are no longer needed. Running services retain their configuration, so the removal only impacts services that are started in the future.

Top

2.3 - acl

curl http://localhost:6464/cloudi/api/rpc/acl.erl

List all the current ACL entries as lists of service name patterns.

Top

2.4 - service_subscriptions

curl -X POST -d '"6a675470-7a1f-11e2-d40e-a5dd00000058"' http://localhost:6464/cloudi/api/rpc/service_subscriptions.erl

List the subscriptions a service instance has initiated.

Top

2.5 - services_add

curl -X POST -d '[{internal, "/tests/http_req/", cloudi_service_request_rate, [{service_name, "/tests/http_req/ruby.xml/get"}, {request_rate, dynamic}], lazy_closest, 5000, 5000, 5000, undefined, undefined, 1, 5, 300, [{duo_mode, true}]}]' http://localhost:6464/cloudi/api/rpc/services_add.erl

Start services and return their Service UUIDs. Provide service configuration using the same syntax found in the configuration file (i.e., /usr/local/etc/cloudi/cloudi.conf). Internal services will need to be located in a code path that the running Erlang VM is aware of (see code_path_add). The syntax of the configuration entries is shown below:

% proplist format with cloudi_service_api types
[{type, internal | external},               % inferred from module or file_path
 {prefix, cloudi:service_name_pattern()},   % default is "/"
 {module, atom() | file:filename()},        % internal service only
 {file_path, file:filename()},              % external service only
 {args, list()},                            % default is []
 {env, list({string(), string()})},         % default is []
 {dest_refresh, dest_refresh()},            % default is immediate_closest
 {protocol, default | local | tcp | udp},   % default is local
 {buffer_size, default | pos_integer()},    % default is 65536 bytes
 {timeout_init, 101..4294967195},           % default is 5000 milliseconds
 {timeout_async, 499..4294967295},          % default is 5000 milliseconds
 {timeout_sync, 499..4294967295},           % default is 5000 milliseconds
 {dest_list_deny, dest_list()},             % default is undefined
 {dest_list_allow, dest_list()},            % default is undefined
 {count_process, pos_integer() | float()},  % default is 1
 {count_thread, pos_integer() | float()},   % default is 1
 {max_r, non_neg_integer()},                % default is 5
 {max_t, seconds()},                        % default is 300 seconds
 {options, service_options_internal() |     % default is []
           service_options_external()}]

% internal service tuple format
{internal,
 (ServiceNamePrefix),
 (ErlangModuleName),
 (ModuleInitializationList),
 (DestinationRefreshMethod),
 (InitializationTimeout in milliseconds),
 (DefaultAsynchronousTimeout in milliseconds),
 (DefaultSynchronousTimeout in milliseconds),
 (DestinationDenyACL),
 (DestinationAllowACL),
 (ProcessCount),
 (MaxR),
 (MaxT in seconds),
 (ServiceOptionsPropList)}

% external service tuple format
{external,
 (ServiceNamePrefix),
 (ExecutableFilePath),
 (ExecutableCommandLineArguments),
 (ExecutableEnvironmentalVariables),
 (DestinationRefreshMethod),
 (Protocol, use 'default'),
 (ProtocolBufferSize, use 'default'),
 (InitializationTimeout in milliseconds),
 (DefaultAsynchronousTimeout in milliseconds),
 (DefaultSynchronousTimeout in milliseconds),
 (DestinationDenyACL),
 (DestinationAllowACL),
 (ProcessCount),
 (ThreadCount),
 (MaxR),
 (MaxT in seconds),
 (ServiceOptionsPropList)}

The ACL lists contain either atoms that reference the current ACL configuration or pattern strings. The ProcessCount and ThreadCount can be specified as integers for an exact count or as a floating point number to provide a CPU count multiplier (values < 1.0 are rounded, values > 1.0 are floored). MaxR is the maximum number of restarts allowed within MaxT seconds (the same parameters used by Erlang supervisors). The ServiceOptionsPropList provides the configurable defaults:

Timeout configuration values in milliseconds may be provided as 'limit_min' or 'limit_max' to use the extreme values (be careful, since 'limit_max' is the equivalent of 49.7 days and no one wants to wait 49.7 days to discover a failure). This is possible with the configuration values timeout_init, timeout_async and timeout_sync and with the service configuration options values request_timeout_immediate_max, response_timeout_immediate_max, timeout_terminate, restart_delay and monkey_latency.
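
A sketch of the syntax (the option choices are only for illustration):

% use the named extreme values instead of explicit milliseconds
[{timeout_init, limit_min},
 {timeout_sync, limit_max},   % equivalent to roughly 49.7 days
 {options,
  [{request_timeout_immediate_max, limit_min}]}]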

Option Default Details
priority_default 0 -128(high) ≤ priority ≤ 127(low)
queue_limit undefined A limit on the total number of incoming service requests that are queued while the service is busy (limits memory consumption).
queue_size undefined A limit on the total memory consumption of incoming service requests that are queued while the service is busy (in kilobytes).
rate_request_max undefined A limit on the incoming service request rate (in requests per second). When set to a list ([]) options can be provided or it can be set to a number value.
dest_refresh_start 500 Delay after startup (in milliseconds) before requesting the initial service group membership (when using a lazy destination refresh method).
dest_refresh_delay 300000 Maximum possible time (in milliseconds) for a service death to remove service group membership (when using a lazy destination refresh method).
request_name_lookup sync Specify whether the service name lookup is sync or async during the timeout period.
request_timeout_adjustment false Whether the service request handler execution time should decrement the request timeout after the request has been handled.
request_timeout_immediate_max 20001 Defines the maximum timeout (in milliseconds) considered "immediate". A service request timeout that is greater than or equal to this value causes the destination to be monitored to avoid timer memory consumption when a destination dies.
response_timeout_adjustment false Whether the service's incoming response timeout should be automatically decremented based on the sender-side's timing (more accurate).
response_timeout_immediate_max 20001 Defines the maximum timeout (in milliseconds) considered "immediate". A service request response timeout that is greater than or equal to this value will send a null response instead of discarding a null response and relying on the sending-side's timer expiring.
count_process_dynamic false Dynamically adjust the number of processes used within the service instance based on the service request rate that occurs. When set to a list ([]) options can be provided.
fatal_exceptions true Uncaught exceptions should cause the service to restart instead of causing a null response. A programming language's fatal exception types will always cause a service restart, even when this option is set to false.
fatal_timeout false A service request timeout is fatal in the service handling the service request. Enabling this ensures the service request execution is terminated, though it may impact the consistency of global state usage if the service is writing to global state and has a timeout during its execution.
fatal_timeout_delay 0 Provide an offset on the fatal timeout value to avoid premature failure of the service when the service request is handled during a time period close to the service request timeout value.
timeout_terminate undefined Termination timeout (in milliseconds) for all the configured service processes. When the termination timeout is not set, an upper-bound is used to ensure that the configured service lifetime is finite when errors occur ((1000 * MaxT) / MaxR - 100, if MaxR > 0 and MaxT > 0). When MaxR is 0 the default termination timeout (not an upper-bound) is (1000 * MaxT) - 100, if MaxT > 0, or 2000, if MaxT = 0. All default termination timeout values are clamped to the range [10..60000].
restart_all false Restart all processes when one process fails.
restart_delay false Delay the restart of the service after a failure, to avoid spurious failures when global resources require extra time before they are used with a new service instance. When set to a list ([]) options can be provided.
critical false Stop the CloudI node if the service fails by being unable to restart (all possible restarts (MaxR) already occurred (in MaxT seconds)).
scope default The scope (an Erlang atom) used for all service name lookups and subscriptions. If you use a unique scope, you can isolate your service and reduce contention when using an immediate destination refresh method.
monkey_latency false Add latency to all service requests and info messages for systems testing. If set to 'system', use the settings within the cloudi_core Erlang application configuration. When set to a list ([]) options can be provided.
monkey_chaos false Add instability to the service for testing systems fault tolerance. If set to 'system', use the settings within the cloudi_core Erlang application configuration. When set to a list ([]) options can be provided.
bind false Bind execution to specific logical processors. Usage requires the Erlang VM was started with the +sbt command-line argument to bind Erlang process scheduler threads to logical processors (e.g., +sbt db). When set to true the logical processors are assigned using round-robin order. When set to a string value all logical processors are provided explicitly.
limit [] (external services only) A list of resource limits to set for the OS processes an external service creates.
owner [] (external services only) Set the owner of any OS processes an external service creates.
nice 0 (external services only) Set the nice value of any OS processes an external service creates.
cgroup undefined (external services only) Set the cgroup membership of any OS processes an external service creates.
chroot undefined (external services only) Set the root directory of any OS processes an external service creates.
syscall_lock undefined (external services only) Set the permitted syscalls of any OS processes an external service creates.
directory undefined (external services only) Set the current working directory of any OS processes an external service creates.
aspects_init_after [] A list of Erlang functions to call in-order after the service initialization (after an internal service has executed cloudi_service_init/3 or an external service has executed the poll function).
aspects_request_before [] A list of Erlang functions to call in-order before the service request function executes (before an internal service has executed cloudi_service_handle_request/11 or an external service has executed the callback function).
aspects_request_after [] A list of Erlang functions to call in-order after the service request function executes (after an internal service has executed cloudi_service_handle_request/11 or an external service has executed the callback function).
aspects_info_before [] (internal services only) A list of Erlang functions to call in-order before the cloudi_service_handle_info/3 function executes.
aspects_info_after [] (internal services only) A list of Erlang functions to call in-order after the cloudi_service_handle_info/3 function executes.
aspects_terminate_before [] A list of Erlang functions to call in-order before the service termination (before an internal service has executed cloudi_service_terminate/2 or an external service's process terminates).
aspects_suspend [] A list of Erlang functions to call in-order before the service is suspended.
aspects_resume [] A list of Erlang functions to call in-order before the service is resumed.
duo_mode false (internal services only) Use two Erlang processes instead of one Erlang process, so that more incoming service throughput can be handled with low latency. If duo_mode is true, cloudi_service_handle_info/3 should contain no cloudi_service:send_sync or cloudi_service:recv_async function calls (cloudi_service:send_async_active or a separate Erlang process can be used instead).
hibernate false (internal services only) Always make the service Erlang processes hibernate to conserve memory by using more frequent garbage collections, if set to true. When set to a list ([]) options can be provided. Enabling hibernate may decrease overall service performance (the service may be less responsive with service requests taking longer when examining the average response time), though it can improve worst-case service response time (i.e., the maximum response time) by avoiding less frequent garbage collections that create larger amounts of latency.
reload false (internal services only) Automatically reload the service module or any of the modules within a service application when the module's beam file is updated on the filesystem.
application_name undefined (internal services only) Use a different name when loading an Erlang application and its dependencies for this internal service.
automatic_loading true Determines if external services load modules automatically or if internal services load their dependencies automatically (which includes the associated Erlang application, the Erlang application dependencies, module loading, and module compilation if necessary).
dispatcher_pid_options [] erlang:spawn_opt/2 options to control memory usage of the service dispatcher process (priority, fullsweep_after, min_heap_size, min_bin_vheap_size, max_heap_size, sensitive, message_queue_data). The dispatcher process lifetime is tied to the service lifetime, so it is a long-lived Erlang process for which avoiding the accumulation of memory is important. The dispatcher process is used in internal services to execute cloudi_service_init/4 if duo_mode is false (otherwise the info process is used).
init_pid_options [] (internal services only) erlang:spawn_opt/2 options to control memory usage of the service dispatcher process used during service initialization (priority, fullsweep_after, min_heap_size, min_bin_vheap_size, max_heap_size, sensitive, message_queue_data).
request_pid_uses 1 (internal services only) How many service requests to handle before utilizing a new Erlang process for a new incoming service request.
request_pid_options [] (internal services only) erlang:spawn_opt/2 options to control memory usage of the service request handling Erlang process (priority, fullsweep_after, min_heap_size, min_bin_vheap_size, max_heap_size, sensitive, message_queue_data).
info_pid_uses infinity (internal services only) How many info messages to handle before utilizing a new Erlang process for a new incoming info message. This Erlang process is the second process that is utilized when duo_mode is true (duo_mode requires that this is set to infinity).
info_pid_options [] (internal services only) erlang:spawn_opt/2 options to control memory usage of the info message handling Erlang process (priority, fullsweep_after, min_heap_size, min_bin_vheap_size, max_heap_size, sensitive, message_queue_data).

rate_request_max:

Option Default Details
period 5 Time period (in seconds) for determining the current rate of service requests.
value 1000 Maximum requests per second. If the current rate of service requests exceeds this limit the service process discards later service requests for the time remaining during the current time period.
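
A sketch of providing rate_request_max as a list of options (using the defaults shown above) within the service configuration options:

{options,
 [{rate_request_max, [{period, 5}, {value, 1000}]}]}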

count_process_dynamic:

Option Default Details
period 5 Time period (in seconds) for determining the current rate of service requests.
rate_request_max 1000 Maximum requests per second. If the current rate of service requests exceeds this limit the process count is increased as much as is required to keep the current rate of service requests under the maximum.
rate_request_min 100 Minimum requests per second. If the current rate of service requests is lower than this limit the process count is decreased as much as is required to keep the current rate of service requests above the minimum.
count_max 4.0 The maximum process count value that can be used for this service. An integer provides an absolute number while a floating point number is used as a CPU multiplier (in the same way as ProcessCount).
count_min 0.5 The minimum process count value that can be used for this service. An integer provides an absolute number while a floating point number is used as a CPU multiplier (in the same way as ProcessCount).
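
A sketch using the defaults shown above:

{options,
 [{count_process_dynamic, [{period, 5},
                           {rate_request_max, 1000},
                           {rate_request_min, 100},
                           {count_max, 4.0},
                           {count_min, 0.5}]}]}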

restart_delay:

Option Default Details
time_exponential_min 1 The first delay (in milliseconds) used during a restart with other delays increasing exponentially (using a base of 2 to provide binary exponential backoff).
time_exponential_max 500 A maximum (in milliseconds) for the exponential growth of the delay during a restart.
time_linear_min undefined A minimum delay (in milliseconds) used during a restart with other delays increasing linearly (using the time_linear_slope setting).
time_linear_max undefined A maximum (in milliseconds) for the linear growth of the delay during a restart.
time_linear_slope undefined The delay increase (in milliseconds) to be used for each restart with the 0-based restart count (i.e., the first restart will be limited to the time_linear_min setting).
time_absolute undefined A single delay (in milliseconds) will be used for each restart and the delay will not change based on the restart count.
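
A sketch of binary exponential backoff between restarts (using the defaults shown above):

{options,
 [{restart_delay, [{time_exponential_min, 1},
                   {time_exponential_max, 500}]}]}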

monkey_latency:

Option Default Details
time_uniform_min undefined Minimum amount of latency (in milliseconds) to be applied from a uniform distribution of random values.
time_uniform_max undefined Maximum amount of latency (in milliseconds) to be applied from a uniform distribution of random values.
time_gaussian_mean undefined Average amount of latency (in milliseconds) to be applied from a gaussian distribution of random values.
time_gaussian_stddev undefined Standard deviation of the gaussian distribution used for random latency values.
time_absolute 5000 Use a single value (in milliseconds) for the amount of latency.

monkey_chaos:

Option Default Details
probability_request 1.0 The probability a service request or info message will terminate a service process (50% == 0.5).
probability_day undefined The probability that a service process will be terminated at a random point during the day.

bind:

If a string value is provided all logical processors are assigned explicitly based on the string contents. The string may contain comma delimited non-negative integers with hyphens for integer ranges. All internal service processes and external service threads need exactly one logical processor specified. For example, the string value "1,13,3,15,5,17,7,19,9,21" could be used with an internal service that has 10 processes or an external service that has 10 total threads. To understand what logical processors are available, check the result of the Erlang function erlang:system_info(cpu_topology).

Support for the bind service configuration option depends on the programming language's support of bind functionality. Currently bind is supported by the ATS, C/C++, Erlang and Python/C CloudI APIs. The bind service configuration option should be preferred over using the cgroup service configuration option with "cpuset.cpus" because bind has cross-platform support and requires fewer CloudI execution user privileges.
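
A sketch of both forms of the bind value (the explicit string is the example from above):

{options, [{bind, true}]}                        % round-robin assignment
{options, [{bind, "1,13,3,15,5,17,7,19,9,21"}]}  % explicit logical processors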

limit:

The value for any option can be an integer or 'infinity' to set the current limit or a list to set both the current and the maximum (setting the maximum requires that the user executing CloudI has permission to set the maximum limit) (e.g., {limit, [{stack, [{current, 8388608}, {maximum, infinity}]}]}).

Option Details
as The maximum size allowed for an OS process' virtual memory (address space) in bytes (total available memory).
core The size allowed for generating a core dump file in bytes from an OS process that crashes.
cpu An absolute CPU time limit in seconds for an OS process.
data The maximum size allowed for an OS process' data segment (initialized data, uninitialized data and heap data) in bytes.
fsize The maximum size allowed for any file that the OS process may use in bytes.
memlock The maximum size allowed for an OS process to lock into RAM in bytes.
msgqueue The maximum size allowed for an OS process to allocate for POSIX message queues in bytes.
nice A ceiling on an OS process' nice value (20 - ceiling == value where -20 ≤ ceiling ≤ 20).
nofile A value one greater than the maximum number of file descriptors the OS process may use.
nproc The maximum number of OS processes that may be created by the OS process.
rss The maximum size of the OS process' resident set (the number of virtual pages resident in RAM) in pages.
rtprio A ceiling on an OS process' real-time priority setting.
rttime An absolute CPU time limit to consume without making a blocking system call in microseconds for an OS process.
sigpending The maximum number of signals that may be queued by the OS process.
stack The maximum size of an OS process' stack in bytes.
vmem The maximum size of an OS process' mapped address space in bytes.

(see your OS manpage for setrlimit to check availability)

owner:

Set the owner of an external service's OS processes.

Option Details
user Set the user as either a positive integer user id or a string username. If a group is not specified in the owner service configuration option the user's group is used.
group Set the group as either a positive integer group id or a string group name.
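
A sketch with hypothetical user and group names:

{options,
 [{owner, [{user, "cloudi"}, {group, "daemon"}]}]}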

cgroup:

Set the cgroup membership of an external service's OS processes.

Option Default Details
name undefined Set the cgroup name as a relative path (e.g., "group1/nested1").
parameters [] Set any parameters that should be set on the cgroup before OS processes are added. The parameter names are cgroups mount version-specific (e.g., v2 could use [{"memory.high", "64m"}] and v1 could use [{"memory.limit_in_bytes", "64m"}]).
update_or_create true Specify whether a cgroup should be created if it doesn't already exist.
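
A sketch using the v2 parameter example from the table above:

{options,
 [{cgroup, [{name, "group1/nested1"},
            {parameters, [{"memory.high", "64m"}]}]}]}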

syscall_lock:

Set the syscall names that are permitted in any OS processes an external service creates. All other syscalls will cause the OS process to exit with an uncatchable signal.

Option Default Details
type pledge | function Set the type of syscall names for configuration. The default is set based on the OS (OpenBSD == pledge and Linux == function).
names [] The syscall names are provided as a list of strings.

The syscalls used by a service executable can change (e.g., due to changes in the executable, its dependencies or the OS), so the names list may need to be updated over time.

Examples of syscall_lock names used only for C/C++ CloudI API use (the http_req test), with comments, are shown below:

% Example 1: OpenBSD 6.8
{syscall_lock,
 [{type, pledge},
  {names,
   [
    % C/C++ CloudI API use
    "stdio",
    % dynamically linked library loading
    "rpath"
   ]}]}
% Example 2: Ubuntu 20.04 Linux (5.4.0 kernel, glibc 2.31)
{syscall_lock,
 [{type, function},
  {names,
   [
    % C/C++ CloudI API use
    "execve","clock_gettime","poll","read","write","close","exit",
    "brk","mmap","munmap", % (malloc/free)
    % Linux-specific
    "exit_group",
    % dynamically linked library loading
    "arch_prctl","mprotect","access","openat","stat","fstat","pread64"
    % static linking requires
    %"arch_prctl","mprotect","uname","readlink"
   ]}]}

Adding the syscall_lock configuration requires knowing all the syscalls a service may call. To log syscall information from a running service, it is possible to run the service with ktrace or strace:

OpenBSD: /usr/bin/ktrace -t c -f /tmp/ktrace.out SERVICE
Linux:   /usr/bin/strace -o /tmp/strace.log SERVICE

On Linux, the CloudI service configuration would have the file_path as "/usr/bin/strace" and the args as "-o /tmp/strace.log SERVICE" with SERVICE as the path to the external service executable.

aspects_init_after:

Provide a list of functions with the type specification shown below, with each function as either an anonymous Erlang function or an Erlang tuple "{module(), FunctionName :: atom()}". Will not be called if initialization does not occur successfully (aspects_terminate_before will still be called with the State as 'undefined').

Service Type Function
Internal
fun((Args :: list(),
     Prefix :: cloudi_service:service_name_pattern(),
     Timeout :: cloudi_service_api:timeout_milliseconds(),
     State :: any(),
     Dispatcher :: cloudi_service:dispatcher()) ->
    {ok, NewState :: any()} |
    {stop, Reason :: any(), NewState :: any()}).
External
% the first function call has State =:= undefined
fun((CommandLine :: list(string()),
     Prefix :: cloudi:service_name_pattern(),
     Timeout :: cloudi_service_api:timeout_milliseconds(),
     State :: any()) ->
    {ok, NewState :: any()} |
    {stop, Reason :: any(), NewState :: any()}).

It is also possible to use the Erlang tuple "{{module(), FunctionName :: atom()}}" to specify an arity 0 function that returns an anonymous function as described above.
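
A minimal sketch of an internal service aspects_init_after entry provided as an anonymous Erlang function (it only logs with io:format and keeps the State unchanged):

{options,
 [{aspects_init_after,
   [fun(_Args, Prefix, _Timeout, State, _Dispatcher) ->
        io:format("service initialized with prefix ~s~n", [Prefix]),
        {ok, State}
    end]}]}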

aspects_request_before:

Provide a list of functions with the type specification shown below, with each function as either an anonymous Erlang function or an Erlang tuple "{module(), FunctionName :: atom()}".

Service Type Function
Internal
fun((RequestType :: cloudi_service:request_type(),
     Name :: cloudi_service:service_name(),
     Pattern :: cloudi_service:service_name_pattern(),
     RequestInfo :: cloudi_service:request_info(),
     Request :: cloudi_service:request(),
     Timeout :: cloudi_service:timeout_value_milliseconds(),
     Priority :: cloudi_service:priority(),
     TransId :: cloudi_service:trans_id(),
     Source :: cloudi_service:source(),
     State :: any(),
     Dispatcher :: cloudi_service:dispatcher()) ->
    {ok, NewState :: any()} |
    {stop, Reason :: any(), NewState :: any()}).
External
fun((RequestType :: cloudi_service:request_type(),
     Name :: cloudi_service:service_name(),
     Pattern :: cloudi_service:service_name_pattern(),
     RequestInfo :: cloudi_service:request_info(),
     Request :: cloudi_service:request(),
     Timeout :: cloudi_service:timeout_value_milliseconds(),
     Priority :: cloudi_service:priority(),
     TransId :: cloudi_service:trans_id(),
     Source :: cloudi_service:source(),
     State :: any()) ->
    {ok, NewState :: any()} |
    {stop, Reason :: any(), NewState :: any()}).

It is also possible to use the Erlang tuple "{{module(), FunctionName :: atom()}}" to specify an arity 0 function that returns an anonymous function as described above.

aspects_request_after:

Provide a list of functions with the type specification shown below, with each function as either an anonymous Erlang function or an Erlang tuple "{module(), FunctionName :: atom()}".

Service Type Function
Internal
fun((RequestType :: cloudi_service:request_type(),
     Name :: cloudi_service:service_name(),
     Pattern :: cloudi_service:service_name_pattern(),
     RequestInfo :: cloudi_service:request_info(),
     Request :: cloudi_service:request(),
     Timeout :: cloudi_service:timeout_value_milliseconds(),
     Priority :: cloudi_service:priority(),
     TransId :: cloudi_service:trans_id(),
     Source :: cloudi_service:source(),
     Result :: cloudi_service:request_result(),
     State :: any(),
     Dispatcher :: cloudi_service:dispatcher()) ->
    {ok, NewState :: any()} |
    {stop, Reason :: any(), NewState :: any()}).
External
fun((RequestType :: cloudi_service:request_type(),
     Name :: cloudi_service:service_name(),
     Pattern :: cloudi_service:service_name_pattern(),
     RequestInfo :: cloudi_service:request_info(),
     Request :: cloudi_service:request(),
     Timeout :: cloudi_service:timeout_value_milliseconds(),
     Priority :: cloudi_service:priority(),
     TransId :: cloudi_service:trans_id(),
     Source :: cloudi_service:source(),
     Result :: cloudi_service:request_result(),
     State :: any()) ->
    {ok, NewState :: any()} |
    {stop, Reason :: any(), NewState :: any()}).

It is also possible to use the Erlang tuple "{{module(), FunctionName :: atom()}}" to specify an arity 0 function that returns an anonymous function as described above.

aspects_info_before:

Provide a list of functions with the type specification shown below, with each function as either an anonymous Erlang function or an Erlang tuple "{module(), FunctionName :: atom()}".

Service Type Function
Internal
fun((Request :: any(),
     State :: any(),
     Dispatcher :: cloudi_service:dispatcher()) ->
    {ok, NewState :: any()} |
    {stop, Reason :: any(), NewState :: any()}).

It is also possible to use the Erlang tuple "{{module(), FunctionName :: atom()}}" to specify an arity 0 function that returns an anonymous function as described above.

aspects_info_after:

Provide a list of functions with the type specification shown below, with each function as either an anonymous Erlang function or an Erlang tuple "{module(), FunctionName :: atom()}".

Service Type Function
Internal
fun((Request :: any(),
     State :: any(),
     Dispatcher :: cloudi_service:dispatcher()) ->
    {ok, NewState :: any()} |
    {stop, Reason :: any(), NewState :: any()}).

It is also possible to use the Erlang tuple "{{module(), FunctionName :: atom()}}" to specify an arity 0 function that returns an anonymous function as described above.

aspects_terminate_before:

Provide a list of functions with the type specification shown below, with each function as either an anonymous Erlang function or an Erlang tuple "{module(), FunctionName :: atom()}". Will still be called if initialization does not occur successfully (with State as 'undefined').

Service Type Function
Internal and External
fun((Reason :: any(),
     Timeout :: cloudi_service_api:timeout_milliseconds(),
     State :: any()) ->
    {ok, State :: any()}).

It is also possible to use the Erlang tuple "{{module(), FunctionName :: atom()}}" to specify an arity 0 function that returns an anonymous function as described above.

aspects_suspend and aspects_resume:

Provide a list of functions with the type specification shown below, with each function as either an anonymous Erlang function or an Erlang tuple "{module(), FunctionName :: atom()}". Each function will be called immediately before the service becomes suspended or resumed.

fun((State :: any()) ->
    {ok, State :: any()}).

It is also possible to use the Erlang tuple "{{module(), FunctionName :: atom()}}" to specify an arity 0 function that returns an anonymous function as described above.

hibernate:

Option Default Details
period 5 Time period (in seconds) for determining the current rate of service requests.
rate_request_min 1 Minimum requests per second. If the current rate of service requests is lower than this limit the service will hibernate.

Please see the configuration file /usr/local/etc/cloudi/cloudi.conf for more specific examples.

Top

2.6 - services_remove

curl -X POST -d '["6e81f0a6-7a1f-11e2-d40e-a5dd00000058", "6e81f0ec-7a1f-11e2-d40e-a5dd00000058"]' http://localhost:6464/cloudi/api/rpc/services_remove.erl

Provide the Service UUIDs for the services that should be stopped. The Service UUID is shown in the output of services. When the service is stopped, its running instance is removed from CloudI, but does not impact any other running instances (even if they are the same service module or binary).

When an internal service is removed and it is the last instance of the service module, the service module is purged to avoid later module conflicts. All instances of the internal service module should be configured in the same way (either a single module, an application, or a release with an application), so that the last instance is removed completely. If an application was used that is named the same as the service module, the application and its dependencies are removed (applications are stopped, modules are purged, and applications are unloaded) if the dependencies are not utilized by other applications. The same occurs if a release was used to start an application that contains the service module (the single top-level application of the release is used to determine dependencies, where the single top-level application within the release is the application that includes the service module).

Top

2.7 - services_restart

curl -X POST -d '["6a675470-7a1f-11e2-d40e-a5dd00000058"]' http://localhost:6464/cloudi/api/rpc/services_restart.erl

Restart the services with the UUIDs provided. The service UUID is shown in the output of services. When the service is restarted, the old instance is stopped and a new instance is started. During the restart delay, it is possible to lose queued service requests and received asynchronous responses. Keeping the state separate between the service instances is important to prevent failures within the new instance.

Top

2.8 - services_suspend

curl -X POST -d '["6a675470-7a1f-11e2-d40e-a5dd00000058"]' http://localhost:6464/cloudi/api/rpc/services_suspend.erl

Suspend the services with the UUIDs provided. Suspended services will not process more service requests (internal services will also not process info messages) and will only keep the data queued for processing in the future (allowing service requests to expire based on their timeout values). Suspended services can be updated while they are suspended. If a service is already suspended, services_suspend will have no effect.

Top

2.9 - services_resume

curl -X POST -d '["6a675470-7a1f-11e2-d40e-a5dd00000058"]' http://localhost:6464/cloudi/api/rpc/services_resume.erl

Resume the services with the UUIDs provided. Resumed services will continue processing service requests (internal services will also process info messages). If a service is not suspended, services_resume will have no effect.

Top

2.10 - services_search

curl -X POST -d '"/tests/http/text/post"' http://localhost:6464/cloudi/api/rpc/services_search.erl

curl -X POST -d '{cloudi_service_test_msg_size, "/tests/msg_size/erlang"}' http://localhost:6464/cloudi/api/rpc/services_search.erl

List the service configuration parameters, with each service's UUID, for the services that are receiving service requests for the given service name. To search within a custom scope, provide both the scope and the service name within a tuple.

Top

2.11 - services_status

curl -X POST -d '[]' http://localhost:6464/cloudi/api/rpc/services_status.erl

curl -X POST -d '["65762d3262a511e882359563d34a433e"]' http://localhost:6464/cloudi/api/rpc/services_status.erl

For each service UUID, provide the current uptime, downtime, interrupt and availability estimates at a single point in time. Each service has a {UUID, Status} pair with the Status described below:

[{type, internal | external},
 {prefix, service_name_pattern()},
 {module, atom()},                             % internal service only
 {file_path, file:filename()},                 % external service only
 {count_process, pos_integer()},               % count_process_dynamic may vary
 {count_thread, pos_integer()},                % external service only
 {pids_os, list(pos_integer())},               % external service only
 {pids_erlang, list(pid())},
 {size_erlang, pos_integer()},
 {suspended, boolean()},
 {uptime_total, nonempty_string()},
 {uptime_running, nonempty_string()},
 {uptime_processing, nonempty_string()},
 {uptime_restarts, nonempty_string()},
 {downtime_day_restarting, nonempty_string()},
 {downtime_week_restarting, nonempty_string()},
 {downtime_month_restarting, nonempty_string()},
 {downtime_year_restarting, nonempty_string()},
 {outages_day_restarting, nonempty_string()},
 {outages_week_restarting, nonempty_string()},
 {outages_month_restarting, nonempty_string()},
 {outages_year_restarting, nonempty_string()},
 {interrupt_day_updating, nonempty_string()},
 {interrupt_week_updating, nonempty_string()},
 {interrupt_month_updating, nonempty_string()},
 {interrupt_year_updating, nonempty_string()},
 {interrupt_day_suspended, nonempty_string()},
 {interrupt_week_suspended, nonempty_string()},
 {interrupt_month_suspended, nonempty_string()},
 {interrupt_year_suspended, nonempty_string()},
 {availability_day_total, nonempty_string()},
 {availability_day_running, nonempty_string()},
 {availability_day_updated, nonempty_string()},
 {availability_day_processing, nonempty_string()},
 {availability_week_total, nonempty_string()},
 {availability_week_running, nonempty_string()},
 {availability_week_updated, nonempty_string()},
 {availability_week_processing, nonempty_string()},
 {availability_month_total, nonempty_string()},
 {availability_month_running, nonempty_string()},
 {availability_month_updated, nonempty_string()},
 {availability_month_processing, nonempty_string()},
 {availability_year_total, nonempty_string()},
 {availability_year_running, nonempty_string()},
 {availability_year_updated, nonempty_string()},
 {availability_year_processing, nonempty_string()}]

The amount of time elapsed is provided in the string format "0 days 0 hours 0 seconds 0 nanoseconds" (with the largest 0 durations omitted and a singular unit if 1) for the uptime, downtime and interrupt values in the status. The total time shows how long the service has been running, including all restarts (in uptime_total). The running time shows how long the last restart of the service has been running (in uptime_running). The processing time shows how long the service has been processing service requests in the currently running service processes. The total number of restarts during the lifetime of the service is provided in uptime_restarts.

The service downtime spent restarting service processes is provided for the past day, week, month and year. The downtime elapsed due to restarting is measured as the time period from immediately before termination starts until the end of the initialization within the new service instance. The downtime values are used to determine the total availability values (e.g., availability_day_total, availability_week_total, availability_month_total and availability_year_total).

The service outage due to restarting is shown visually as a count in the time period for each "outages" string value. The outages_day_restarting value shows a single string character for each 30 minute segment after 00:00 UTC. Each restart causes an integer count to be updated in the string character associated with the restart time period. If a restart time period crosses a string character boundary, all associated string characters will have an integer count updated. If the count is above 9 for a single character, it is represented as an X character. The | (pipe) character is a cursor that shows the position of the current time in the string. Any characters after the cursor represent the previous time period. The outages_week_restarting value shows a single string character for each 4 hour segment after 00:00 UTC on Monday of the current week. The outages_month_restarting value shows a single string character for each day of the current month. The outages_year_restarting value shows a single string character for each 1/3rd of a month during the current year.

The service interrupt spent updating service processes is provided for the past day, week, month and year. The interrupt elapsed will always overestimate the amount of time spent updating, by including extra coordination delay and should be considered the total time spent updating all service processes. Only the time spent updating the most recent service processes gets tracked as the interrupt (i.e., a fraction of the running time period which is after any restarts occurred). The interrupt values are used to determine the updated availability values (e.g., availability_day_updated, availability_week_updated, availability_month_updated and availability_year_updated).

The service interrupt spent with service processes suspended is provided for the past day, week, month and year. The processing time is the running time minus the updating interruptions and the suspended interruptions.

The amount of time the service has been running since any restarts occurred is used to determine the running availability values (e.g., availability_day_running, availability_week_running, availability_month_running and availability_year_running). All the availability values are percentages to describe the fraction of estimated uptime during each time period. All status data is determined using the same point in time, so the values may be compared among separate services.

The currently executing Erlang processes for each service are provided in pids_erlang. These Erlang processes that represent the service are the source pids provided when handling a CloudI service request. The current external service OS pids are provided in pids_os. Both lists are ordered by process index and may be smaller than expected (based on count_process and count_thread) if the service is not initialized.

The size_erlang value provides the size of the service within the Erlang VM as the size of all the pids_erlang Erlang processes in bytes. The size_erlang value represents the memory used for the most critical parts of a CloudI service in the Erlang VM related to receiving CloudI service requests and responses (i.e., the source pids). For internal services the size_erlang value will not include the size of the info_pid or the request_pid if the duo_mode service configuration option is false (the default value). If an internal service has the duo_mode service configuration option set to true, the size_erlang value will not include the size of the dispatcher_pid or the request_pid. That means the size_erlang value always represents the long-lived CloudI service memory in the Erlang VM.

Example output is available at
https://cloudi.org/config.html#services_status.

Top

2.12 - services_update

curl -X POST -d '[{"", [{module, cloudi_service_test_messaging}, {modules_load, [cloudi_service_test_messaging]}, {sync, false}]}]' http://localhost:6464/cloudi/api/rpc/services_update.erl

Update services while they are running without any interruption in their operation. Each update that should occur is specified as a {UUID, Options} pair with the Options described below:

% internal service update options
[{type, internal},
 {module, atom()},                            % only required field
 {module_state, module_state_internal()},     % update service state
 {sync, boolean()},                           % defaults to true
 {modules_load, list(atom())},                % defaults to []
 {modules_unload, list(atom())},              % defaults to []
 {code_paths_add, list(string())},            % defaults to []
 {code_paths_remove, list(string())},         % defaults to []
 {dest_refresh, dest_refresh()},              % update service configuration
 {timeout_init, timeout_milliseconds()},      % update service configuration
 {timeout_async, timeout_milliseconds()},     % update service configuration
 {timeout_sync, timeout_milliseconds()},      % update service configuration
 {dest_list_deny, dest_list()},               % update service configuration
 {dest_list_allow, dest_list()},              % update service configuration
 {options, service_update_plan_options_internal()}] % update service configuration options

Internal service updates require that the module field provide the service module name. If the module is used in more than one service instance, provide the service UUID as "" to ensure the update applies to all service instances that use the module. If the module needs to be reloaded, it must be added to the modules_load list and the new module will be loaded using Erlang hot-code loading during the update. If the service state needs to be modified during the update, a function can be provided in the module_state field.

The module_state value for an internal service update can be provided as an anonymous Erlang function or an Erlang tuple "{module(), FunctionName :: atom()}" with the types below:

fun((OldModuleVersion :: list(any()),
     NewModuleVersion :: list(any()),
     OldState :: any()) ->
    {ok, NewState :: any()} |
    {error, Reason :: any()}).

It is also possible to use the Erlang tuple "{{module(), FunctionName :: atom()}}" to specify an arity 0 function that returns an anonymous function as described above. Providing the module version before and after the module update can help guide any upgrade or downgrade logic to return the appropriate state data. If the module_state function returns an error (or any other problem occurs during the update), the update does not change any service state, though the module and code_path modifications take effect after the update has been processed (i.e., modules_load, modules_unload, code_paths_add, code_paths_remove, all occur even when an update fails).
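
A minimal sketch of a module_state function that keeps the service state unchanged across the module update:

fun(_OldModuleVersion, _NewModuleVersion, OldState) ->
    % no state transformation is needed for this update
    {ok, OldState}
end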

External service updates require either the type be set to external or that the file_path, args, or env field be set (to cause the OS process of the external service to restart during the update).

% external service update options
[{type, external},
 {file_path, string()},                       % restarts OS process
 {args, string()},                            % restarts OS process
 {env, list({string(), string()})},           % restarts OS process
 {sync, boolean()},                           % defaults to true
 {modules_load, list(atom())},                % defaults to []
 {modules_unload, list(atom())},              % defaults to []
 {code_paths_add, list(string())},            % defaults to []
 {code_paths_remove, list(string())},         % defaults to []
 {dest_refresh, dest_refresh()},              % update service configuration
 {timeout_init, timeout_milliseconds()},      % update service configuration
 {timeout_async, timeout_milliseconds()},     % update service configuration
 {timeout_sync, timeout_milliseconds()},      % update service configuration
 {dest_list_deny, dest_list()},               % update service configuration
 {dest_list_allow, dest_list()},              % update service configuration
 {options, service_update_plan_options_external()}] % update service configuration options

The sync field is used to determine whether the update occurs while no service processes are processing a service request. If global state is being used while handling a service request, it is safest to have sync set to true, which is the default. If a service sends synchronous service requests to itself, the update can timeout when sync is set to true but can succeed with sync set to false.

Do not set the sync field to false unless the service update requires it (due to the service having a cyclic dependency on itself, as described above). If an internal service update has the sync field set to false and the service's module is present in the modules_load list, the service's request process may use the new module prematurely (if the request process is spawned after the module is loaded, or if it calls any exported functions within the module) before the module_state function gets called.

A subset of the service configuration options can be updated:

% internal service configuration options that are updatable
[{priority_default, priority()} |
 {queue_limit, undefined | non_neg_integer()} |
 {queue_size, undefined | pos_integer()} |
 {rate_request_max,
  list({period, period_seconds()} |
       {value, number()}) | number() | undefined} |
 {dest_refresh_start, dest_refresh_delay_milliseconds()} |
 {dest_refresh_delay, dest_refresh_delay_milliseconds()} |
 {request_name_lookup, sync | async} |
 {request_timeout_adjustment, boolean()} |
 {request_timeout_immediate_max,
  request_timeout_immediate_max_milliseconds()} |
 {response_timeout_adjustment, boolean()} |
 {response_timeout_immediate_max,
  response_timeout_immediate_max_milliseconds()} |
 {monkey_latency,
  list({time_uniform_min, latency_time_milliseconds()} |
       {time_uniform_max, latency_time_milliseconds()} |
       {time_gaussian_mean, latency_time_milliseconds()} |
       {time_gaussian_stddev, float()} |
       {time_absolute, latency_time_milliseconds()}) | system | false} |
 {monkey_chaos,
  list({probability_request, float()} |
       {probability_day, float()}) | system | false} |
 {dispatcher_pid_options,
  list({priority, low | normal | high} |
       {fullsweep_after, non_neg_integer()} |
       {min_heap_size, non_neg_integer()} |
       {min_bin_vheap_size, non_neg_integer()} |
       {max_heap_size, non_neg_integer() |
                       #{size => non_neg_integer(),
                         kill => boolean(),
                         error_logger => boolean()}} |
       {sensitive, boolean()} |
       {message_queue_data, off_heap | on_heap})} |
 {aspects_init_after, list(aspect_init_after_internal())} |
 {aspects_request_before, list(aspect_request_before_internal())} |
 {aspects_request_after, list(aspect_request_after_internal())} |
 {aspects_info_before, list(aspect_info_before_internal())} |
 {aspects_info_after, list(aspect_info_after_internal())} |
 {aspects_terminate_before, list(aspect_terminate_before_internal())} |
 {aspects_suspend, list(aspect_suspend())} |
 {aspects_resume, list(aspect_resume())} |
 {init_pid_options,
  list({priority, low | normal | high} |
       {fullsweep_after, non_neg_integer()} |
       {min_heap_size, non_neg_integer()} |
       {min_bin_vheap_size, non_neg_integer()} |
       {max_heap_size, non_neg_integer() |
                       #{size => non_neg_integer(),
                         kill => boolean(),
                         error_logger => boolean()}} |
       {sensitive, boolean()} |
       {message_queue_data, off_heap | on_heap})} |
 {request_pid_uses, infinity | pos_integer()} |
 {request_pid_options,
  list({priority, low | normal | high} |
       {fullsweep_after, non_neg_integer()} |
       {min_heap_size, non_neg_integer()} |
       {min_bin_vheap_size, non_neg_integer()} |
       {max_heap_size, non_neg_integer() |
                       #{size => non_neg_integer(),
                         kill => boolean(),
                         error_logger => boolean()}} |
       {sensitive, boolean()} |
       {message_queue_data, off_heap | on_heap})} |
 {info_pid_uses, infinity | pos_integer()} |
 {info_pid_options,
  list({priority, low | normal | high} |
       {fullsweep_after, non_neg_integer()} |
       {min_heap_size, non_neg_integer()} |
       {min_bin_vheap_size, non_neg_integer()} |
       {max_heap_size, non_neg_integer() |
                       #{size => non_neg_integer(),
                         kill => boolean(),
                         error_logger => boolean()}} |
       {sensitive, boolean()} |
       {message_queue_data, off_heap | on_heap})} |
 {hibernate,
  list({period, period_seconds()} |
       {rate_request_min, number()}) | boolean()} |
 {reload, boolean()}]
% external service configuration options that are updatable
[{priority_default, ?PRIORITY_HIGH..?PRIORITY_LOW} |
 {queue_limit, undefined | non_neg_integer()} |
 {queue_size, undefined | pos_integer()} |
 {rate_request_max,
  list({period, period_seconds()} |
       {value, number()}) | number() | undefined} |
 {dest_refresh_start, dest_refresh_delay_milliseconds()} |
 {dest_refresh_delay, dest_refresh_delay_milliseconds()} |
 {request_name_lookup, sync | async} |
 {request_timeout_adjustment, boolean()} |
 {request_timeout_immediate_max,
  request_timeout_immediate_max_milliseconds()} |
 {response_timeout_adjustment, boolean()} |
 {response_timeout_immediate_max,
  response_timeout_immediate_max_milliseconds()} |
 {monkey_latency,
  list({time_uniform_min, latency_time_milliseconds()} |
       {time_uniform_max, latency_time_milliseconds()} |
       {time_gaussian_mean, latency_time_milliseconds()} |
       {time_gaussian_stddev, float()} |
       {time_absolute, latency_time_milliseconds()}) | system | false} |
 {monkey_chaos,
  list({probability_request, float()} |
       {probability_day, float()}) | system | false} |
 {dispatcher_pid_options,
  list({priority, low | normal | high} |
       {fullsweep_after, non_neg_integer()} |
       {min_heap_size, non_neg_integer()} |
       {min_bin_vheap_size, non_neg_integer()} |
       {max_heap_size, non_neg_integer() |
                       #{size => non_neg_integer(),
                         kill => boolean(),
                         error_logger => boolean()}} |
       {sensitive, boolean()} |
       {message_queue_data, off_heap | on_heap})} |
 {aspects_init_after, list(aspect_init_after_external())} |
 {aspects_request_before, list(aspect_request_before_external())} |
 {aspects_request_after, list(aspect_request_after_external())} |
 {aspects_terminate_before, list(aspect_terminate_before_external())} |
 {aspects_suspend, list(aspect_suspend())} |
 {aspects_resume, list(aspect_resume())}]

For more information about a specific service configuration option, refer to the service configuration documentation.
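
As a small illustration of the option syntax, a monkey_latency entry from the updatable list above could be written as the Erlang term below (the millisecond values are invented for illustration only):

% illustrative monkey_latency option value (values invented)
{monkey_latency, [{time_uniform_min, 1000},
                  {time_uniform_max, 5000}]}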

2.13 - services

curl http://localhost:6464/cloudi/api/rpc/services.erl

List the service configuration parameters along with each service's UUID. Any default service configuration options are omitted to keep the output concise. The service tuple format in the output is as described in services_add, after any defaults have been assigned.

% service tuple format list
[{UUID,
  {internal,
   Prefix :: service_name_pattern(),
   Module :: atom(),
   Args :: list(),
   DestRefresh :: dest_refresh(),
   TimeoutInit :: timeout_initialize_milliseconds(),
   TimeoutAsync :: timeout_send_async_milliseconds(),
   TimeoutSync :: timeout_send_sync_milliseconds(),
   DestListDeny :: dest_list(),
   DestListAllow :: dest_list(),
   CountProcess :: pos_integer(),
   MaxR :: non_neg_integer(),
   MaxT :: seconds(),
   Options :: service_options_internal()} |
  {external,
   Prefix :: service_name_pattern(),
   FilePath :: file:filename(),
   Args :: string(),
   Env :: list({string(), string()}),
   DestRefresh :: dest_refresh(),
   Protocol :: local | tcp | udp,
   BufferSize :: pos_integer(),
   TimeoutInit :: timeout_initialize_milliseconds(),
   TimeoutAsync :: timeout_send_async_milliseconds(),
   TimeoutSync :: timeout_send_sync_milliseconds(),
   DestListDeny :: dest_list(),
   DestListAllow :: dest_list(),
   CountProcess :: pos_integer(),
   CountThread :: pos_integer(),
   MaxR :: non_neg_integer(),
   MaxT :: seconds(),
   Options :: service_options_external()}}]
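
As a minimal sketch, a single internal service entry in the output could resemble the following (the UUID, prefix, module name, and values are invented for illustration and will differ in real output):

% hypothetical output entry for one internal service
% (UUID, prefix, module name, and values invented for illustration)
[{"a0247dd0-d823-11ee-9ce1-d1f3b3c5a23e",
  {internal,
   "/example/",
   cloudi_service_example,
   [],
   immediate_closest,
   5000, 5000, 5000,
   undefined, undefined,
   1, 5, 300,
   []}}]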

2.14 - nodes_set

curl -X POST -d "[{reconnect_delay, 300}]" http://localhost:6464/cloudi/api/rpc/nodes_set.erl

Set the node configuration to specify how CloudI node connections are handled (using distributed Erlang).

Option Default Details
set all Whether the settings should be set on 'all' nodes or only the 'local' node.
nodes [] Exact node names for distributed Erlang connections.
reconnect_start 300 The delay (in seconds) before attempting to connect to any distributed Erlang nodes for the first time.
reconnect_delay 60 The delay (in seconds) before attempting to reconnect to any distributed Erlang nodes.
listen visible What distributed Erlang node connections should be monitored: 'visible' or 'all' (to include hidden nodes). If it is not set, it is inferred from connect.
connect visible What distributed Erlang node connections to create: 'visible' or 'hidden' (single link, not part of a fully connected network topology).
timestamp_type erlang What timestamp to use for generating unique service request transaction ids: 'erlang' (strictly monotonically increasing time), 'os' (OS time, weakest ordering possible) or 'warp' (time adjusted gradually).
discovery undefined Distributed Erlang node auto-discovery mechanism configuration.
cost [] A list of node-cost pairs. A default cost may be provided by using 'default' as the node name (e.g., [{default, 0.02225}]). The node cost is the amount of currency per hour (for tracking electricity costs it is: average kilowatts * currency per kilowatt-hour (kWh)).
cost_precision 2 Cost currency decimal places.
log_reconnect info The loglevel used to log the currently disconnected nodes before a reconnect attempt occurs.
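
For example, a single nodes_set request can combine several of the options above; the values below are illustrative only (the cost value reuses the documented example, roughly 0.5 kW * 0.0445 currency/kWh = 0.02225 currency/hour):

curl -X POST -d "[{set, all}, {nodes, ['cloud001@cluster1', 'cloud002@cluster1']}, {reconnect_delay, 120}, {cost, [{default, 0.02225}]}, {log_reconnect, warn}]" http://localhost:6464/cloudi/api/rpc/nodes_set.erl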

discovery:

Option Default Details
multicast [] LAN multicast distributed Erlang node auto-discovery configuration.
ec2 [] Amazon Web Services (AWS) EC2 distributed Erlang node auto-discovery configuration (auto-discovery within a single region). Requires custom configuration that provides EC2 access credentials.

multicast:

Option Default Details
interface {0,0,0,0} The interface address (for UDP).
address {224,0,0,1} The LAN multicast address (for UDP).
port 4475 The multicast port (for UDP).
ttl 1 The multicast TTL (time-to-live).

LAN multicast auto-discovery requires that ntpd is running to keep the time of all LAN nodes as close as possible. However, all CloudI nodes should have ntpd running to keep all transaction ids similar (even if LAN multicast auto-discovery is not used).
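
As a sketch, enabling LAN multicast auto-discovery with the documented defaults written out explicitly might look like the following (the values simply restate the defaults above):

curl -X POST -d "[{discovery, [{multicast, [{interface, {0,0,0,0}}, {address, {224,0,0,1}}, {port, 4475}, {ttl, 1}]}]}]" http://localhost:6464/cloudi/api/rpc/nodes_set.erl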

ec2:

Option Default Details
access_key_id undefined AWS Access Key ID as a string (e.g. "${AWS_ACCESS_KEY_ID}").
secret_access_key undefined AWS Secret Access Key as a string (e.g. "${AWS_SECRET_ACCESS_KEY}").
host "ec2.amazonaws.com" AWS EC2 API endpoint (e.g., "ec2.${AWS_DEFAULT_REGION}.amazonaws.com").
groups [] EC2 security groups selection to limit the instances selected during EC2 distributed Erlang node auto-discovery.
tags [] EC2 tags selection to limit the instances selected during EC2 distributed Erlang node auto-discovery.

EC2 auto-discovery configuration requires setting 'access_key_id' and 'secret_access_key' along with 'groups' and/or 'tags'. Both 'groups' and 'tags' allow boolean expressions with nesting (specified as a list), e.g.:

[TAG | GROUP | OPERATOR]                 % implicitly an OR relationship
{'OR', [TAG | GROUP | OPERATOR]}         % OR boolean OPERATOR
{'AND', [TAG | GROUP | OPERATOR]}        % AND boolean OPERATOR

"security-group-name"                    % GROUP

"key1" | ["key3", "key4"]                % TAG (key names)

{"key2", "value2"} |                     % TAG (key/value combinations)
{["key5", "key6"], "value5"} |           %
{"key5", ["value5", "value6"]} |         %
{["key5", "key6"], ["value5", "value6"]} %

% groups example #1 (implicit OR relationship):
["security-group-a", "security-group-b"]

% groups example #2 (explicit AND relationship):
[{'AND', ["security-group-a", "security-group-b"]}]

% groups example #3 (explicit OR relationship, functionally the same as #1):
[{'OR', ["security-group-a", "security-group-b"]}]

% tags example
[{'AND', [{"deployment", "development"}, {"cluster", "project42"}]}]

The EC2 auto-discovery requires that the security group(s) used by the instances expose the distributed Erlang ports with TCP rules.

EC2 auto-discovery requires distributed Erlang long names, so -name should be used instead of -sname within the vm.args configuration file. All EC2 nodes that will be discovered need to use long names to avoid distributed Erlang error log messages. EC2 auto-discovery also requires that all Erlang nodes that want to be discovered use the same -name value (the node name before the '@') and the same -setcookie value.
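
Putting the pieces together, an illustrative EC2 auto-discovery configuration (credential and region strings written as environment variable references, as in the option descriptions above; the tag key/value is invented) might be:

curl -X POST -d '[{discovery, [{ec2, [{access_key_id, "${AWS_ACCESS_KEY_ID}"}, {secret_access_key, "${AWS_SECRET_ACCESS_KEY}"}, {host, "ec2.${AWS_DEFAULT_REGION}.amazonaws.com"}, {tags, [{"cluster", "project42"}]}]}]}]' http://localhost:6464/cloudi/api/rpc/nodes_set.erl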

2.15 - nodes_get

curl http://localhost:6464/cloudi/api/rpc/nodes_get.erl

List the current nodes configuration in the same format provided to nodes_set, with default settings omitted to keep the output concise.

2.16 - nodes_add

curl -X POST -d "['cloud001@cluster1']" http://localhost:6464/cloudi/api/rpc/nodes_add.erl

Explicitly add a CloudI node name so that services on the added node and services on all other connected CloudI nodes can send each other service requests. A nodes_add call is similar to nodes_set with [{set, all}, {nodes, Nodes}] where Nodes is the final list of all nodes (all connected nodes are modified).
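
For example, if 'cloud002@cluster1' is already configured, the nodes_add call above is roughly equivalent to the following nodes_set call (node names illustrative):

curl -X POST -d "[{set, all}, {nodes, ['cloud002@cluster1', 'cloud001@cluster1']}]" http://localhost:6464/cloudi/api/rpc/nodes_set.erl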

2.17 - nodes_remove

curl -X POST -d "['cloud001@cluster1']" http://localhost:6464/cloudi/api/rpc/nodes_remove.erl

Explicitly remove a CloudI node name. The CloudI node must have been added explicitly to be removed explicitly (not added by an auto-discovery method). A nodes_remove call is similar to nodes_set with [{set, all}, {nodes, Nodes}] where Nodes is the final list of all nodes (all connected nodes are modified).

2.18 - nodes_alive

curl http://localhost:6464/cloudi/api/rpc/nodes_alive.erl

List all the CloudI nodes known to be connected.

2.19 - nodes_dead

curl http://localhost:6464/cloudi/api/rpc/nodes_dead.erl

List all the CloudI nodes that are disconnected but expected to reconnect.

2.20 - nodes_status

curl -X POST -d '[]' http://localhost:6464/cloudi/api/rpc/nodes_status.erl

curl -X POST -d "['cloudi@hostname']" http://localhost:6464/cloudi/api/rpc/nodes_status.erl

Provide information about all the CloudI nodes (including the local node). Each node has a {Node, Status} pair with the Status described below:

[{services_running, nonempty_string()},
 {services_restarted, nonempty_string()},
 {services_failed, nonempty_string()},
 {uptime, nonempty_string()},
 {uptime_cost_total, nonempty_string()},
 {uptime_cost_day, nonempty_string()},
 {uptime_cost_week, nonempty_string()},
 {uptime_cost_month, nonempty_string()},
 {uptime_cost_year, nonempty_string()},
 {tracked, nonempty_string()},
 {tracked_cost_total, nonempty_string()},
 {tracked_cost_day, nonempty_string()},
 {tracked_cost_week, nonempty_string()},
 {tracked_cost_month, nonempty_string()},
 {tracked_cost_year, nonempty_string()},
 {tracked_disconnects, nonempty_string()},
 {disconnected, boolean()},
 {downtime_day_disconnected, nonempty_string()},
 {downtime_week_disconnected, nonempty_string()},
 {downtime_month_disconnected, nonempty_string()},
 {downtime_year_disconnected, nonempty_string()},
 {outages_day_disconnected, nonempty_string()},
 {outages_week_disconnected, nonempty_string()},
 {outages_month_disconnected, nonempty_string()},
 {outages_year_disconnected, nonempty_string()},
 {availability_day, nonempty_string()},
 {availability_week, nonempty_string()},
 {availability_month, nonempty_string()},
 {availability_year, nonempty_string()}]

The local node will provide uptime results while remote nodes will provide tracked results and information related to being disconnected. If a cost was provided with nodes_set, the cumulative cost of the node will also be present. All the status information is provided from information stored on the local node.

Connection outages due to a disconnected remote node are shown visually as counts within time periods in each "outages" string value. The outages_day_disconnected value shows a single string character for each 30 minute segment after 00:00 UTC. Each disconnect updates the integer count in the string character associated with the disconnect's time period; if a disconnect's time period crosses a string character boundary, every associated string character has its integer count updated. If the count for a single character is above 9, it is represented as an 'X' character. The '|' (pipe) character is a cursor that shows the position of the current time in the string, and any characters after the cursor represent the previous time period. The outages_week_disconnected value shows a single string character for each 4 hour segment after 00:00 UTC on Monday of the current week, the outages_month_disconnected value shows a single string character for each day of the current month, and the outages_year_disconnected value shows a single string character for each 1/3rd of a month during the current year.

Example output is available at
https://cloudi.org/config.html#nodes_status.

2.21 - nodes_status_reset (new in 2.0.8)

curl http://localhost:6464/cloudi/api/rpc/nodes_status_reset.erl

Remove all node status data for nodes that are currently dead. If a currently disconnected (i.e., "dead") node reconnects in the future, its status on the local node will appear to be that of a new remote node. If the currently dead nodes were added manually (i.e., with the 'nodes' configuration instead of the 'discovery' configuration, so without distributed Erlang node auto-discovery), the local node will not attempt to reconnect to them after the status reset unless they are added manually again.

2.22 - nodes

curl http://localhost:6464/cloudi/api/rpc/nodes.erl

List both the connected and disconnected CloudI nodes.

2.23 - logging_set

curl -X POST -d '[{file, undefined}, {syslog, []}]' http://localhost:6464/cloudi/api/rpc/logging_set.erl

Set the logging configuration for a CloudI node.

% logging configuration options
[{file, undefined | string()},                % defaults to "cloudi.log"
 {file_sync,
  logging_file_sync_milliseconds()},          % defaults to 0 milliseconds
 {level, off | fatal | error |
         warn | info | debug | trace},        % defaults to trace
 {redirect, undefined | node()},              % defaults to undefined
 {syslog, undefined | list()},                % defaults to undefined
 {stdout, boolean()},                         % defaults to false
 {queue_mode_async, pos_integer()},           % defaults to 750
 {queue_mode_sync, pos_integer()},            % defaults to 1000
 {queue_mode_overload, pos_integer()},        % defaults to 10000
 {formatters, undefined | list()},            % defaults to undefined
 {log_time_offset,
  off | fatal | error |
  warn | info | debug | trace},               % defaults to off
 {aspects_log_before,
  list(aspects_log_before())},                % defaults to []
 {aspects_log_after,
  list(aspects_log_after())}]                 % defaults to []

file_sync:

Ensure all log file data has been written to the file by flushing any operating system buffers that contain pending write data, with the interval provided in milliseconds.

queue_mode_async:

The logger process will use asynchronous mode while its message queue length is less than the configured value.

queue_mode_sync:

The logger process will use synchronous mode when its message queue length is greater than the configured value. The gap between queue_mode_async and queue_mode_sync values avoids latency associated with changing the queue_mode. The sync queue_mode will cause all services to log more slowly while the logger process writes its pending log requests (messages) as quickly as possible.

queue_mode_overload:

The logger process will use overload mode when its message queue length is greater than the configured value. The value prevents CloudI from terminating due to extreme memory consumption.
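
As a sketch, the three queue_mode thresholds (together with file_sync) could be tuned in a single logging_set request; the values below are illustrative only and preserve the queue_mode_async < queue_mode_sync < queue_mode_overload ordering implied by the descriptions above:

curl -X POST -d '[{file_sync, 10000}, {queue_mode_async, 500}, {queue_mode_sync, 2000}, {queue_mode_overload, 20000}]' http://localhost:6464/cloudi/api/rpc/logging_set.erl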

log_time_offset:

Log, at the provided loglevel, when the Erlang VM adjusts its internal view of the system time, including the size of the change in nanoseconds.

aspects_log_before:

Provide a list of functions with the type specification shown below, with each function as either an anonymous Erlang function or an Erlang tuple "{module(), FunctionName :: atom()}". Each function will be called before the log data is stored to disk.

fun((Level :: fatal | error | warn | info | debug | trace,
     Timestamp :: erlang:timestamp(),
     Node :: node(),
     Pid :: pid(),
     Module :: module(),
     Line :: pos_integer(),
     Function :: atom() | undefined,
     Arity :: arity() | undefined,
     MetaData :: list({atom(), any()}),
     LogMessage :: iodata()) ->
    ok)

It is also possible to use the Erlang tuple "{{module(), FunctionName :: atom()}}" to specify an arity 0 function that returns an anonymous function as described above.

aspects_log_after:

Provide a list of functions with the type specification shown below, with each function as either an anonymous Erlang function or an Erlang tuple "{module(), FunctionName :: atom()}". Each function will be called after the log data is stored to disk.

fun((Level :: fatal | error | warn | info | debug | trace,
     Timestamp :: erlang:timestamp(),
     Node :: node(),
     Pid :: pid(),
     Module :: module(),
     Line :: pos_integer(),
     Function :: atom() | undefined,
     Arity :: arity() | undefined,
     MetaData :: list({atom(), any()}),
     LogMessage :: iodata()) ->
    ok)

It is also possible to use the Erlang tuple "{{module(), FunctionName :: atom()}}" to specify an arity 0 function that returns an anonymous function as described above.
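
For illustration, a function matching either type specification above could be written as the anonymous Erlang function below (it ignores every argument; a real aspect would perform its side effect before returning ok):

% minimal aspects_log_before/aspects_log_after function sketch
fun(_Level, _Timestamp, _Node, _Pid, _Module, _Line,
    _Function, _Arity, _MetaData, _LogMessage) ->
    ok
end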

2.24 - logging_file_set

curl -X POST -d '"different_filename.log"' http://localhost:6464/cloudi/api/rpc/logging_file_set.erl

Set the file path for logging output. If set to 'undefined', logging output will only be sent to syslog and formatters with an output module.
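
For example, a sketch of disabling file output entirely (as described above):

curl -X POST -d 'undefined' http://localhost:6464/cloudi/api/rpc/logging_file_set.erl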

2.25 - logging_level_set

curl -X POST -d 'warn' http://localhost:6464/cloudi/api/rpc/logging_level_set.erl

Modify the loglevel. The loglevel is changed internally with an Erlang module update, so any logging statements that are turned off create no latency. If set to 'undefined' or 'off', logging output will only be sent to syslog and formatters with an output module. The available loglevel values are: off, fatal, error, warn, info, debug, trace.

2.26 - logging_stdout_set

curl -X POST -d 'true' http://localhost:6464/cloudi/api/rpc/logging_stdout_set.erl

Send all logging output to stdout.

2.27 - logging_syslog_set

curl -X POST -d '[{identity, "CloudI"}, {facility, local0}, {level, trace}]' http://localhost:6464/cloudi/api/rpc/logging_syslog_set.erl

Send all logging output to syslog.

Option Default Details
identity "CloudI" String syslog identity (referred to as the APP-NAME in RFC5424).
facility local0 A syslog facility provided as a name (kernel | user | mail | daemon | auth0 | syslog | print | news | uucp | clock0 | auth1 | ftp | ntp | auth2 | auth3 | clock1 | local0 | local1 | local2 | local3 | local4 | local5 | local6 | local7) or as an integer (≥ 0).
level trace The syslog loglevel specified with a CloudI loglevel (with the CloudI loglevel -> syslog level equivalence below):
fatal -> critical (2),
error -> error (3),
warn -> warning (4),
info -> notice (5),
debug -> informational (6),
trace -> debug (7).
transport local The transport to use for syslog data (local | udp | tcp | tls).
transport_options [] The transport options to use for the syslog socket.
protocol rfc3164 The syslog protocol to use (rfc3164 | rfc5424).
path "/dev/log" The filesystem path to use for the local transport destination.
host {127,0,0,1} The host to use for the udp, tcp or tls transport destination.
port undefined The port to use for the transport destination (undefined uses the default port for the transport).
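
A sketch of a syslog configuration using the udp transport (values illustrative; 514 is the conventional syslog UDP port, not a CloudI default):

curl -X POST -d '[{identity, "CloudI"}, {facility, local0}, {level, info}, {transport, udp}, {host, {127,0,0,1}}, {port, 514}, {protocol, rfc5424}]' http://localhost:6464/cloudi/api/rpc/logging_syslog_set.erl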

2.28 - logging_formatters_set

curl -X POST -d '[{any, [{output, lager_file_backend}, {output_args, [{file, "lager.log"}]}, {formatter, lager_default_formatter}, {level, trace}]}]' http://localhost:6464/cloudi/api/rpc/logging_formatters_set.erl

Provide integration with lager-compatible formatters and lager-compatible backends. Each formatter entry specifies a list of modules for the source of the logging output to match against paired with formatter options (e.g., {[module1, module2], Options}). A separate formatter entry with an 'any' atom instead of a list of modules is used if the source of the logging output is not provided (e.g., {any, Options}). Use 'STDOUT' and 'STDERR' as a module name entry to control the stdout and stderr output coming from external services (logging output for both streams provides the OS pid next to the stream name, instead of a module line number). If only a formatter is specified (i.e., without an output module, a lager-compatible backend), the formatter transforms the logging output to be logged to the CloudI log file and/or syslog. If an output module is provided (that implements the gen_event Erlang/OTP behaviour), it will consume the logging output separately from the CloudI log file and syslog.

Option Default Details
level trace The formatter loglevel specified with a CloudI loglevel or a lager loglevel (with the lager loglevel -> CloudI loglevel equivalence below):
emergency -> fatal,
alert (becomes emergency) -> fatal,
critical (becomes emergency) -> fatal,
error -> error,
warning -> warn,
notice (becomes warning) -> warn,
info -> info,
debug -> debug,
none -> off.
output undefined The lager-compatible backend module which implements the gen_event Erlang/OTP behaviour.
output_args [] Arguments to provide to the output module's init/1 function.
output_max_r 5 The maximum number of restarts allowed for an output module that crashes.
output_max_t 300 The maximum time period for restarts to occur in when an output module crashes.
formatter undefined The lager-compatible formatter module which provides formatting for either the other logging methods (CloudI file and/or syslog) or the output module. A formatter module must export a format/2 function.
formatter_config [] Configuration provided to the formatter module's format/2 function's second parameter.
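
As an illustrative sketch combining the two entry formats described above (my_service_module is a hypothetical module name; the formatter and output modules reuse those from the curl example):

curl -X POST -d '[{[my_service_module], [{formatter, lager_default_formatter}, {level, info}]}, {any, [{output, lager_file_backend}, {output_args, [{file, "lager.log"}]}, {formatter, lager_default_formatter}, {level, trace}]}]' http://localhost:6464/cloudi/api/rpc/logging_formatters_set.erl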

2.29 - logging_redirect_set

curl -X POST -d 'cloudi@host' http://localhost:6464/cloudi/api/rpc/logging_redirect_set.erl

Redirect all local log output to a remote CloudI node. Use 'undefined' as the node name to log locally.
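
For example, to switch back to local logging (a sketch based on the description above):

curl -X POST -d 'undefined' http://localhost:6464/cloudi/api/rpc/logging_redirect_set.erl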

2.30 - logging_status

curl http://localhost:6464/cloudi/api/rpc/logging_status.erl

Provide the current logging status. If any errors occur when writing to the log file, the error information is provided in the logging status output.

[{queue_mode, async | sync | overload},
 {queue_mode_sync_last_start, nonempty_string()},
 {queue_mode_sync_last_start_event, nonempty_string()},
 {queue_mode_sync_last_end, nonempty_string()},
 {queue_mode_sync_last_end_event, nonempty_string()},
 {queue_mode_sync_last_total, nonempty_string()},
 {queue_mode_overload_last_start, nonempty_string()},
 {queue_mode_overload_last_start_event, nonempty_string()},
 {queue_mode_overload_last_end, nonempty_string()},
 {queue_mode_overload_last_end_event, nonempty_string()},
 {queue_mode_overload_last_total, nonempty_string()},
 {time_offset_last_change, nonempty_string()},
 {time_offset_last_event, nonempty_string()},
 {file_messages_fatal, nonempty_string()},
 {file_messages_error, nonempty_string()},
 {file_messages_warn, nonempty_string()},
 {file_messages_info, nonempty_string()},
 {file_messages_debug, nonempty_string()},
 {file_messages_trace, nonempty_string()},
 {file_sync_fail_count, nonempty_string()},
 {file_sync_fail_types, nonempty_list(atom())},
 {file_write_fail_count, nonempty_string()},
 {file_write_fail_types, nonempty_list(atom())},
 {file_read_fail_count, nonempty_string()},
 {file_read_fail_types, nonempty_list(atom())}]

The queue_mode is async while logging is occurring asynchronously (the default). If the logger receives a large number of logging requests, it will change to sync (synchronous) mode which forces Erlang service processes to wait until their logging request has finished. The logger may still receive too many logging requests if it is the destination of a redirect and that could cause the logger to use overload mode. In overload mode, the logger will discard logging requests as quickly as possible (i.e., without any write to the log occurring) to reduce the extreme memory consumption until it is possible to go back to sync mode (each mode has a limit set by logging_set).

When the overload mode transitions back to sync mode, an error is logged with the duration of the overload mode. The most recent durations of both sync mode and overload mode are provided in the logging status output (if either mode was used during CloudI's runtime). The event timestamps are the same timestamps present in the log output, while the start/end timestamps may change based on changes to the OS time. The logging status output will also provide the last OS time change in seconds (if an OS time change occurred during CloudI's runtime).

2.31 - logging_status_reset

curl http://localhost:6464/cloudi/api/rpc/logging_status_reset.erl

Reset the logging status.

2.32 - logging

curl http://localhost:6464/cloudi/api/rpc/logging.erl

List the current logging configuration in the same format used in the configuration file, with default settings omitted to keep the output concise.

2.33 - code_path_add

curl -X POST -d '"/home/user/code/services"' http://localhost:6464/cloudi/api/rpc/code_path_add.erl

Add a directory to the CloudI Erlang VM code server's search paths. The path is always appended to the list of search paths (you should not need to rely on search path order because of unique naming).

2.34 - code_path_remove

curl -X POST -d '"/home/user/code/services"' http://localhost:6464/cloudi/api/rpc/code_path_remove.erl

Remove a directory from the CloudI Erlang VM code server's search paths. This doesn't impact any running services, only services that will be started in the future.

2.35 - code_path

curl http://localhost:6464/cloudi/api/rpc/code_path.erl

List all the CloudI Erlang VM code server search paths (in the same order the directories are searched).

2.36 - code_status

curl http://localhost:6464/cloudi/api/rpc/code_status.erl

Provide information about the execution environment:

[{build_machine, nonempty_string()},
 {build_kernel_version, nonempty_string()},
 {build_operating_system, nonempty_string()},
 {build_erlang_otp_release, nonempty_string()},
 {build_cloudi_time, nonempty_string()},
 {build_cloudi_version, nonempty_string()},
 {build_cloudi_cxx_compiler_version, nonempty_string()},
 {build_cloudi_cxx_dependencies_versions, nonempty_string()},
 {build_erlang_erts_c_compiler_version, nonempty_string()},
 {install_erlang_erts_time, nonempty_string()},        % ISO8601 timestamp
 {install_erlang_kernel_time, nonempty_string()},      % ISO8601 timestamp
 {install_erlang_stdlib_time, nonempty_string()},      % ISO8601 timestamp
 {install_erlang_sasl_time, nonempty_string()},        % ISO8601 timestamp
 {install_erlang_compiler_time, nonempty_string()},    % ISO8601 timestamp
 {install_cloudi_time, nonempty_string()},             % ISO8601 timestamp
 {runtime_erlang_erts_version, nonempty_string()},
 {runtime_erlang_kernel_version, nonempty_string()},
 {runtime_erlang_stdlib_version, nonempty_string()},
 {runtime_erlang_sasl_version, nonempty_string()},
 {runtime_erlang_compiler_version, nonempty_string()},
 {runtime_erlang_compilation, nonempty_string()},      % "aot" | "jit"
 {runtime_cloudi_version, nonempty_string()},
 {runtime_machine_processors, pos_integer()},          % logical processors
 {runtime_start, nonempty_string()},                   % ISO8601 timestamp
 {runtime_clock, nonempty_string()},                   % ISO8601 timestamp
 {runtime_clock_offset, nonempty_string()},            % offset to HW clock
 {runtime_total, nonempty_string()},
 {runtime_cloudi_start, nonempty_string()},            % ISO8601 timestamp
 {runtime_cloudi_total, nonempty_string()},
 {runtime_cloudi_changes,                              % service file changes
  list([{type, internal | external},
        {file_age, nonempty_string()},
        {file_path, nonempty_string()},
        {file_loaded, boolean()},               % internal only (new in 2.0.8)
        {file_version, nonempty_list(byte())},  % internal only (new in 2.0.8)
        {service_ids, nonempty_list(service_id())}])}]

The runtime_cloudi_changes list shows CloudI service files that have changed on the filesystem after CloudI started. The files may have changed due to services_update use.

Example output is available at
https://cloudi.org/config.html#code_status.
