I have a USB device that replies {0x00 0xFF 0x01 0x03} when you send it {0x00 0xFF 0x01 0x02}. I'd like to implement a ping test: send the command repeatedly and compute an average response time.
Here is my MCVE. It connects, sends a command, waits for the response, repeats this in a loop, and computes an average speed.
#include <iostream>
#include <deque>
#include <vector>
#include <chrono>
#include <boost/bind.hpp>
#include <boost/asio.hpp>
#include <boost/asio/serial_port.hpp>
#include <boost/thread.hpp>

class ReadHelper
{
    boost::asio::serial_port& port;
    size_t bufferSize;
    char* read_msg_;
    std::deque<char> m_received;
    boost::mutex m_mutex;
    boost::condition_variable m_var;
public:
    ReadHelper(boost::asio::serial_port& port, size_t bufferSize) :
        port(port),
        bufferSize(bufferSize),
        read_msg_( new char[bufferSize] )
    {
    }
    ~ReadHelper()
    {
        delete[] read_msg_;
    }
    std::chrono::steady_clock::time_point m_started;
    inline long long elapsed()
    {
        auto elapsed = std::chrono::steady_clock::now() - m_started;
        return std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
    }
    long long wait_for_response()
    {
        boost::mutex::scoped_lock lock(m_mutex);
        while ( m_received.size() < 4 )
        {
            // effectively a busy-wait poll with a 1 us timeout
            m_var.timed_wait(lock, boost::posix_time::microseconds(1));
        }
        if ( m_received[0] == 0x00 &&
             m_received[1] == (char)0xFF &&
             m_received[2] == 0x01 &&
             m_received[3] == 0x03 )
        {
            auto elapsed_ = elapsed();
            std::cout << "==> Received ping response after " << elapsed_ << " us, took ~" << int(elapsed_/1000) << " ms <==" << std::endl;
            m_received.erase(m_received.begin(), m_received.begin()+4);
            return elapsed_;
        }
        return 0;
    }
    void read_complete(const boost::system::error_code& error, size_t bytes_transferred)
    {
        if (!error)
        {
            // read completed, so process the data
            if (bytes_transferred != 0)
            {
                std::cout << "Read completed for " << bytes_transferred << " byte(s) after " << elapsed() << " us" << std::endl;
                boost::mutex::scoped_lock lock(m_mutex);
                m_received.insert(m_received.end(), read_msg_, read_msg_+bytes_transferred);
                m_var.notify_all();
            }
            read_start(); // start waiting for another asynchronous read again
        }
    }
    void read_start()
    {
        port.async_read_some(boost::asio::buffer(read_msg_, bufferSize),
                             boost::bind(&ReadHelper::read_complete,
                                         this,
                                         boost::asio::placeholders::error,
                                         boost::asio::placeholders::bytes_transferred));
    }
};

int main( int argc, char* argv[] )
{
    if (argc != 2)
    {
        std::cout << "Missing port name argument" << std::endl;
        return 1;
    }
    std::string portName = argv[1];
    boost::asio::io_service io;
    boost::asio::serial_port port(io);
    try
    {
        port.open(portName);
    }
    catch(...)
    {
        std::cout << "Unable to open " << portName << std::endl;
        return 1;
    }
    if (!port.is_open())
    {
        std::cout << "Unable to open " << portName << std::endl;
        return 1;
    }
    port.set_option( boost::asio::serial_port_base::baud_rate( 256000 ) );
    port.set_option( boost::asio::serial_port_base::parity( boost::asio::serial_port_base::parity::none ) );
    port.set_option( boost::asio::serial_port_base::stop_bits( boost::asio::serial_port_base::stop_bits::one ) );
    port.set_option( boost::asio::serial_port_base::character_size( 8 ) );
    port.set_option( boost::asio::serial_port_base::flow_control( boost::asio::serial_port_base::flow_control::hardware ) );
    ReadHelper helper(port, 1);
    helper.read_start();
    // run the IO service as a separate thread, so the main thread can block on standard input
    boost::thread m_thread(boost::bind(&boost::asio::io_service::run, &io));
    boost::this_thread::sleep( boost::posix_time::milliseconds( 500 ) );
    std::vector<char> cmd{0x00,(char)0xFF,0x01,0x02};
    long long total_time = 0;
    size_t count = 5;
    for ( size_t index = 0; index != count; ++index )
    {
        std::cout << std::endl << "Sending new ping command..." << std::endl;
        helper.m_started = std::chrono::steady_clock::now();
        auto sent = boost::asio::write(port,boost::asio::buffer(cmd.data(),cmd.size()));
        if (sent == cmd.size())
        {
            std::cout << "Done sending after " << helper.elapsed() << " us, waiting for response" << std::endl;
            total_time += helper.wait_for_response();
        }
        else
        {
            return 2;
        }
    }
    std::cout << std::endl << "Average speed is " << (double(total_time)/count)/1000.0 << " ms" << std::endl;
    port.close();
    m_thread.join();
    return 0;
}
Note that the read buffer size can be changed (second parameter of the ReadHelper ctor).
With a read buffer of size 1, the output is:
Sending new ping command...
Done sending after 370 us, waiting for response
Read completed for 1 byte(s) after 1591 us
Read completed for 1 byte(s) after 2188 us
Read completed for 1 byte(s) after 2671 us
Read completed for 1 byte(s) after 3255 us
==> Received ping response after 3865 us, took ~3 ms <==
Sending new ping command...
Done sending after 140 us, waiting for response
Read completed for 1 byte(s) after 758 us
Read completed for 1 byte(s) after 1173 us
Read completed for 1 byte(s) after 1493 us
Read completed for 1 byte(s) after 1762 us
==> Received ping response after 1926 us, took ~1 ms <==
Sending new ping command...
Done sending after 230 us, waiting for response
Read completed for 1 byte(s) after 766 us
Read completed for 1 byte(s) after 1222 us
Read completed for 1 byte(s) after 1933 us
Read completed for 1 byte(s) after 2512 us
==> Received ping response after 2880 us, took ~2 ms <==
Sending new ping command...
Done sending after 123 us, waiting for response
Read completed for 1 byte(s) after 456 us
Read completed for 1 byte(s) after 703 us
Read completed for 1 byte(s) after 985 us
Read completed for 1 byte(s) after 1290 us
==> Received ping response after 1543 us, took ~1 ms <==
Sending new ping command...
Done sending after 122 us, waiting for response
Read completed for 1 byte(s) after 473 us
Read completed for 1 byte(s) after 753 us
Read completed for 1 byte(s) after 1000 us
Read completed for 1 byte(s) after 3325 us
==> Received ping response after 3517 us, took ~3 ms <==
Average speed is 2.7462 ms
If I change the read buffer size from 1 to 4, the output is:
Sending new ping command...
Done sending after Read completed for 4 byte(s) after 579 us
319 us, waiting for response
==> Received ping response after 1023 us, took ~1 ms <==
Sending new ping command...
Done sending after Read completed for 4 byte(s) after 413 us
194 us, waiting for response
==> Received ping response after 1160 us, took ~1 ms <==
Sending new ping command...
Done sending after Read completed for 4 byte(s) after 383 us
171 us, waiting for response
==> Received ping response after 790 us, took ~0 ms <==
Sending new ping command...
Done sending after 125 us, waiting for responseRead completed for 4
byte(s) after 392 us
==> Received ping response after 548 us, took ~0 ms <==
Sending new ping command...
Done sending after 135Read completed for 4 byte(s) after 355 us
us, waiting for response
==> Received ping response after 818 us, took ~0 ms <==
Average speed is 0.8678 ms
Much faster. This is not surprising: reading the 4 bytes at once makes the read process faster (a single call to read_complete instead of 4).
BUT, if I change it to 5, the output is:
Sending new ping command...
Done sending after 198 us, waiting for response
Read completed for 4 byte(s) after 30050 us
==> Received ping response after 30414 us, took ~30 ms <==
Sending new ping command...
Done sending after 148 us, waiting for response
Read completed for 4 byte(s) after 28813 us
==> Received ping response after 29208 us, took ~29 ms <==
Sending new ping command...
Done sending after 206 us, waiting for response
Read completed for 4 byte(s) after 29507 us
==> Received ping response after 30087 us, took ~30 ms <==
Sending new ping command...
Done sending after 235 us, waiting for response
Read completed for 4 byte(s) after 28559 us
==> Received ping response after 29154 us, took ~29 ms <==
Sending new ping command...
Done sending after 130 us, waiting for response
Read completed for 4 byte(s) after 28912 us
==> Received ping response after 29524 us, took ~29 ms <==
Average speed is 29.6774 ms
Why does Boost take so long to call read_complete with a bigger buffer? It should return as soon as the 4 response bytes are received; there is no reason for it to take so long just because the buffer is 1 byte bigger.
Is there a timeout that can be parametrized, or anything else that could speed up this code? Is there a way to read any amount of available data without a size constraint?
Note: this is just an MCVE. In my original code, the device is used in situations where it sends tons of data, and a read buffer of size 4 is not enough (we end up losing data, likely because the PC does not read fast enough). A buffer greater than 4 (actually 512) makes the PC read fast enough, but then it introduces an unacceptable latency for the {0x00 0xFF 0x01 0x03} response. I'm trying to have both features work fast enough.
1 Answer
I cannot reproduce this. Assuming that you expect "pong" to be received in response to "ping", I've modified the main loop to wait 500 ms between each ping. This prevents a situation where multiple pongs are received ahead of time.
Even in this scenario, the code behaves exactly as I think you'd expect:
Live On Coliru
#include <boost/asio.hpp>
#include <array>
#include <chrono>
#include <condition_variable>
#include <deque>
#include <functional>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

namespace asio = boost::asio;
using namespace std::chrono_literals;
using namespace std::placeholders;
using Clock = std::chrono::steady_clock;
static constexpr auto now = Clock::now;

class ReadHelper {
    using error_code = boost::system::error_code;
    asio::serial_port& port;
    size_t bufferSize;
    char* read_msg_;
    std::deque<char> m_received;
    std::mutex m_mutex;
    std::condition_variable m_var;

  public:
    ReadHelper(asio::serial_port& port, size_t bufferSize)
        : port(port), bufferSize(bufferSize), read_msg_(new char[bufferSize]) {}
    ~ReadHelper() { delete[] read_msg_; }

    Clock::time_point m_started = now();
    inline Clock::duration elapsed() { return now() - m_started; }

    Clock::duration wait_for_response() {
        std::unique_lock lock(m_mutex);
        m_var.wait(lock, [this] { return m_received.size() >= 4; });
        if (m_received[0] == '\x00' && //
            m_received[1] == '\xFF' && //
            m_received[2] == '\x01' && //
            m_received[3] == '\x03')   //
        {
            auto elapsed_ = elapsed();
            std::cout << "==> Received ping response after " << elapsed_ << ", took " << (elapsed_ / 1ms)
                      << " ms <==" << std::endl;
            m_received.erase(m_received.begin(), m_received.begin() + 4);
            return elapsed_;
        }
        return {};
    }

    void read_complete(error_code const& error, size_t bytes_transferred) {
        if (!error) {
            // read completed, so process the data
            if (bytes_transferred != 0) {
                std::cout << "Read completed for " << bytes_transferred << " byte(s) after "
                          << elapsed() / 1ms << "ms" << std::endl;
                std::scoped_lock lock(m_mutex);
                m_received.insert(m_received.end(), read_msg_, read_msg_ + bytes_transferred);
                m_var.notify_all();
            }
            read_loop(); // start waiting for another asynchronous read again
        }
    }

    void read_loop() {
        port.async_read_some(asio::buffer(read_msg_, bufferSize),
                             bind(&ReadHelper::read_complete, this, _1, _2));
    }
};

int main(int argc, char** argv) try {
    std::string const portName = argc > 1 ? argv[1] : "/dev/ttyUSB0";

    // run the IO service as a separate thread, so the main thread can block on
    // standard input
    asio::thread_pool io{1};

    using SP = asio::serial_port;
    SP port(io, portName);

    //port.set_option(SP::baud_rate(256000)); // for debug demo
    port.set_option(SP::parity(SP::parity::none));
    port.set_option(SP::stop_bits(SP::stop_bits::one));
    port.set_option(SP::character_size(8));
    port.set_option(SP::flow_control(SP::flow_control::hardware));

    ReadHelper helper(port, argc > 2 ? std::stoul(argv[2]) : 1);
    helper.read_loop();

    static constexpr std::array cmd{'\x00', '\xFF', '\x01', '\x02'};
    Clock::duration total_time{};
    constexpr size_t count = 5;
    for (size_t index = 0; index != count; ++index) {
        std::this_thread::sleep_for(500ms);
        std::cout << std::endl << "Sending new ping command..." << std::endl;
        helper.m_started = now();
        auto sent = write(port, asio::buffer(cmd));
        if (sent == cmd.size()) {
            std::cout << "Done sending after " << helper.elapsed() / 1ms << "ms, waiting for response"
                      << std::endl;
            total_time += helper.wait_for_response();
        } else {
            return 2;
        }
    }
    std::cout << std::endl << "Average speed is " << (total_time / 1.0ms / count) << " ms" << std::endl;

    port.close();
    io.join();
} catch (boost::system::system_error const& se) {
    std::cerr << "Exception: " << se.code().message() << std::endl;
    if (se.code().has_location()) {
        auto sl = se.code().location();
        std::cerr << " - from: " << sl.function_name() << std::endl;
        std::cerr << " - at: " << sl.file_name() << ":" << sl.line() << std::endl;
    }
    return 1;
}
Now using socat
to emulate a serial port:
$ socat -d -d pty,raw,echo=0 pty,raw,echo=0
2025/03/12 21:50:36 socat[1911535] N PTY is /dev/pts/3
2025/03/12 21:50:36 socat[1911535] N PTY is /dev/pts/4
2025/03/12 21:50:36 socat[1911535] N starting data transfer loop with FDs [5,5] and [7,7]
And a simplistic client that reads "pings" and responds with "pongs" as soon as they arrive:
stdbuf -i 0 -o 0 xxd -c 4 < /dev/pts/4 |
while read ping; do
echo "ping received $ping"
printf '\0\xff\x01\x03' >> /dev/pts/4
done | nl
We test the behaviour like this:
for bufSize in 1 4 5
do
sleep 1
./build/sotest /dev/pts/3 $bufSize |& tee buf$bufSize
done
This looks like this, interactively: [screen recording omitted]
And the different buf* output files compare like this: [diff of buf1/buf4/buf5 omitted]
All in all, I'm going to have to assume that something about the device response is different. Perhaps there is no hardware flow control available?
Side Notes
Note that the above makes many improvements to the code, mainly leveraging more modern C++ standard library features. I don't think any of them are relevant to the timing issue. However, I do think that the design of ReadHelper could be heavily simplified. If you can point me at the documentation for the wire protocol, I could show you examples of how to achieve that while streaming other (large-volume) data at the same time.
Tags: c++ - Why buffer size of boost::asio::serial_port::async_read_some affects function speed? - Stack Overflow