Shadesmar -- Fast C++ IPC using shared memory


Shadesmar

An IPC library that uses the system's shared memory to pass messages. The communication paradigm is either publish-subscribe or RPC, similar to ROS and ROS2. The library was built to be used within Project MANAS.

Required packages: Boost, Msgpack

Features

  • Multiple subscribers and publishers.

  • Multithreaded RPC support.

  • Uses a circular buffer to pass messages between processes.

  • Faster than passing messages over the network stack, as in ROS.

  • Read and write directly from GPU memory to shared memory.

  • Decentralized, without resource starvation.

  • Allows both serialized message passing (using msgpack) and passing raw bytes.

  • No need to define external IDL files for messages. Use C++ classes as message definitions.

Publish-Subscribe (serialized messages)

Message Definition (custom_message.h):

#include <shadesmar/message.h>

class InnerMessage : public shm::BaseMsg {
  public:
    int inner_val{};
    std::string inner_str{};
    SHM_PACK(inner_val, inner_str);

    InnerMessage() = default;
};

class CustomMessage : public shm::BaseMsg {
  public:
    int val{};
    std::vector<int> arr;
    InnerMessage im;
    SHM_PACK(val, arr, im);

    explicit CustomMessage(int n) {
      val = n;
      for (int i = 0; i < 1000; ++i) {
        arr.push_back(val);
      }
    }

    // MUST BE INCLUDED
    CustomMessage() = default;
};

Publisher:

#include <shadesmar/pubsub/publisher.h>
#include <custom_message.h>

int main() {
    shm::pubsub::Publisher<CustomMessage, 16 /* buffer size */ > pub("topic_name");

    CustomMessage msg;
    msg.val = 0;
    
    for (int i = 0; i < 1000; ++i) {
        msg.init_time(shm::SYSTEM); // add system time as the timestamp
        pub.publish(msg);
        msg.val++;
    }
}

Subscriber:

#include <iostream>
#include <shadesmar/pubsub/subscriber.h>
#include <custom_message.h>

void callback(const std::shared_ptr<CustomMessage>& msg) {
    std::cout << msg->val << std::endl;
}

int main() {
    shm::pubsub::Subscriber<CustomMessage, 16 /* buffer size */ > sub("topic_name", callback);
    
    // Using `spinOnce` with a manual loop
    while(true) {
        sub.spinOnce();
    }
    // OR
    // Using `spin`
    sub.spin();
}

Publish-Subscribe (raw bytes)

Publisher:

#include <cstdint>
#include <cstdlib>

#include <shadesmar/memory/copier.h>
#include <shadesmar/pubsub/publisher.h>

int main() {
    shm::memory::DefaultCopier cpy;
    shm::pubsub::PublisherBin<16 /* buffer size */ > pub("topic_name", &cpy);
    const uint32_t data_size = 1024;
    void *data = malloc(data_size);
    
    for (int i = 0; i < 1000; ++i) {
        pub.publish(data, data_size);
    }

    free(data);
}

Subscriber:

#include <shadesmar/memory/copier.h>
#include <shadesmar/pubsub/subscriber.h>

void callback(shm::memory::Ptr *msg) {
  // `msg->ptr` to access `data`
  // `msg->size` to access `size`

  // The memory will be free'd at the end of this callback.
  // Copy to another memory location if you want to persist the data.
  // Alternatively, if you want to avoid the copy, you can call
  // `msg->no_delete()` which prevents the memory from being deleted
  // at the end of the callback.
}

int main() {
    shm::memory::DefaultCopier cpy;
    shm::pubsub::SubscriberBin<16 /* buffer size */ > sub("topic_name", &cpy, callback);
    
    // Using `spinOnce` with a manual loop
    while(true) {
        sub.spinOnce();
    }
    // OR
    // Using `spin`
    sub.spin();
}
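
The comments in the callback above describe two ways to use the payload beyond the callback's lifetime: copy it out, or call msg->no_delete(). Below is a minimal sketch of the copying approach, using only the msg->ptr and msg->size members shown above; the callback name and the received buffer are illustrative, not part of the library.

#include <cstdint>
#include <cstring>
#include <vector>

#include <shadesmar/memory/copier.h>

// Illustrative only: copy each incoming payload into an owned buffer so the
// data remains valid after the shared-memory block is freed post-callback.
std::vector<std::vector<uint8_t>> received;

void persisting_callback(shm::memory::Ptr *msg) {
  std::vector<uint8_t> copy(msg->size);
  std::memcpy(copy.data(), msg->ptr, msg->size);
  received.push_back(std::move(copy));
}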

RPC

Server:

#include <shadesmar/rpc/server.h>

int add(int a, int b) {
  return a + b;
}

int main() {
  shm::rpc::Function<int(int, int)> rpc_fn("add_fn", add);

  while (true) {
    rpc_fn.serve_once();
  }

  // OR...

  rpc_fn.serve();
}

Client:

#include <shadesmar/rpc/client.h>

int main() {
  shm::rpc::FunctionCaller rpc_fn("add_fn");

  std::cout << rpc_fn(4, 5).as<int>() << std::endl;
}

Note:

  • shm::pubsub::Subscriber has a boolean parameter called extra_copy. extra_copy=true is faster for smaller (<1MB) messages, and extra_copy=false is faster for larger (>1MB) messages. For 10MB messages, the throughput with extra_copy=false is nearly 50% higher than with extra_copy=true. See _read_with_copy() and _read_without_copy() in include/shadesmar/pubsub/topic.h for more information.

  • queue_size must be a power of 2. This is due to the underlying shared memory allocator, which uses a red-black tree. See include/shadesmar/memory/allocator.h for more information.

  • You may get this error while publishing: Increase max_buffer_size. This occurs when the memory allocated to the topic buffer cannot store all the messages. The default buffer size for every topic is 256MB. You can access and modify shm::memory::max_buffer_size; the value must be set before creating a publisher, as shown in the sketch after this list.
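
A minimal sketch of the last note, assuming only what the note states (shm::memory::max_buffer_size is modifiable and must be set before the publisher is constructed); the 512MB value and the include layout are illustrative:

#include <shadesmar/pubsub/publisher.h>
#include <custom_message.h>

int main() {
    // Assumption: shm::memory::max_buffer_size is reachable through the
    // publisher header; include the relevant memory header directly if not.
    // Grow the per-topic buffer from the 256MB default to 512MB *before*
    // constructing any publisher on the topic.
    shm::memory::max_buffer_size = 512ULL * 1024 * 1024;

    shm::pubsub::Publisher<CustomMessage, 16 /* buffer size */ > pub("topic_name");
}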

