Uuv_simulator: Implement a more realistic sonar sensor simulation

Created on 2 May 2017  ·  33 Comments  ·  Source: uuvsimulator/uuv_simulator

We currently abuse Gazebo's laser sensors and call it sonar.

A more realistic simulation of sonar sensors (side-scan sonar and multibeam sonar) would add much value to uuv_simulator and make it more interesting to a larger community.

Labels: enhancement, help wanted


All 33 comments

Hey, we are working towards sensor simulations in uuv_simulator as well. Currently, we have something akin to a sidescan that gives us waterfall images. More importantly, we are working on a nice FLS sensor here: https://github.com/smarc-project/smarc_simulations/blob/master/smarc_gazebo_plugins/smarc_gazebo_ros_plugins/src/gazebo_ros_image_sonar.cpp . It builds on the simulator described in the paper "A novel GPU-based sonar simulator for real-time applications". Gazebo makes it a bit hard, so it relies on abusing a simulated depth camera, but it's still reasonably efficient.

If there is still interest in this, I will supply a few examples of how the sonars look and perhaps work towards completing everything and creating a PR to this repo.

@nilsbore some examples would be great. I'm also really interested in this.

Hi @nilsbore, all contributions are very welcome :) If you want to make a pull request, I will be happy to review it. Another solution would be to add information to the UUV simulator documentation on how to integrate your plugins, so more people can use them :)

Hi @musamarcusso, and thanks for your great work on this simulator! I'll see how much work it would take to create a clean PR to this repo and follow one of the proposed paths. Thanks!

Hi @nilsbore,

I'm looking forward to seeing your progress with adapting the sonar to the UUV simulator. Please let us know! =D

Was any progress made on the FLS? I'm also looking to utilize it.

Hello @nilsbore! It's been a while since this thread was started, but may I ask how the FLS simulation is progressing? I am looking forward to using the sonar simulation you are working on in Gazebo as well, and hopefully you have made some more progress after the last update you posted here.

Hi all, I'm sorry I haven't been able to make much progress on this topic. The sensor simulation is open source and available here: https://github.com/smarc-project/smarc_simulations/tree/master/smarc_gazebo_plugins/smarc_gazebo_ros_plugins . It's called the gazebo_ros_image_sonar. Even though we're using a somewhat dated version of uuv_simulator, this plugin only depends on Gazebo and so should work everywhere. In uuv_simulator, you can use it by including this urdf: https://github.com/smarc-project/smarc_simulations/blob/master/smarc_sensor_plugins/smarc_sensor_plugins_ros/urdf/sonar_snippets.xacro and adding something like the following snippet to your vehicle urdf:

<xacro:forward_looking_sonar
      namespace="${namespace}"
      suffix="down"
      parent_link="${namespace}/base_link"
      topic="forward_sonar"
      mass="0.015"
      update_rate="10"
      samples="100"
      fov="1.54719755"
      width="260"
      height="120">
      <inertia ixx="0.00001" ixy="0.0" ixz="0.0" iyy="0.00001" iyz="0.0" izz="0.00001" />
      <origin xyz="0.83 0 -0.22" rpy="0 ${0.2*pi} 0" />
      <visual>
      </visual>
    </xacro:forward_looking_sonar>

You should then be able to see something like this on a ROS topic: video.
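The macro can presumably be instantiated more than once if you want several units; a hypothetical second, aft-facing instance might look like the following (the suffix, topic, and origin here are illustrative values I have not tested, chosen only so that link names and topics don't collide):

```xml
<!-- Hypothetical second instance: only suffix, topic, and origin differ
     from the snippet above, so link names and ROS topics do not collide. -->
<xacro:forward_looking_sonar
      namespace="${namespace}"
      suffix="aft"
      parent_link="${namespace}/base_link"
      topic="aft_sonar"
      mass="0.015"
      update_rate="10"
      samples="100"
      fov="1.54719755"
      width="260"
      height="120">
      <inertia ixx="0.00001" ixy="0.0" ixz="0.0" iyy="0.00001" iyz="0.0" izz="0.00001" />
      <!-- mounted at the stern, pitched down, yawed 180 degrees to face aft -->
      <origin xyz="-0.83 0 -0.22" rpy="0 ${0.2*pi} ${pi}" />
      <visual>
      </visual>
    </xacro:forward_looking_sonar>
```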

When time allows (after deadlines), I will try to break this sensor out into its own package that can be used by anyone that wants to use the FLS sim and nothing else.

Best,
Nils

Noted @nilsbore! Thank you so much for the update. I'll try to make improvements to this sensor as well, so that its performance can be improved.

Really great update and progress so far! Looking forward to further updates on this project and really nice job @nilsbore!

I've cherry-picked from @NickSadjoli and @nilsbore to try and get something fast-forwardable from master. It seems you both have diverged in different ways.

Would be nice to have you guys take a look and comment/commit so this could be used by others.

Sorry for the late reply @willcbaker! I have diverged from @nilsbore because I am integrating his code into my own project, which uses different environment parameters. His code is working fine in my environment, however, albeit with room for improvements that would bring it closer to the implementation described in the "A novel GPU-based sonar simulator for real-time applications" paper.

[Image: FLS output in the upgraded turbid-water environment]

Unfortunately I have not made much progress in making improvements to the code due to other problems, but I'll make sure to post updates when I have time to do so.

In terms of the commit that you made, I can safely say the additions are exactly the same as the ones on my branch. It should work with Gazebo now, and you can get the sonar images from the "rexrov/depth/image_sonar" topic.

I should note that the topic name is currently tied to "/depth", since the sensor implementation relies on a depth camera for now. We should see whether this can be changed to avoid possible confusion; hopefully further updates will follow soon.
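If you want to consume those images outside RViz, something like the sketch below should work (a rough, untested outline: the helper just rescales float intensities to displayable 8-bit values, and the rospy/cv_bridge boilerplate is shown in comments since it needs a running ROS environment; the topic name is the one mentioned above):

```python
def to_uint8(values):
    """Rescale a flat list of float intensities to the 0-255 display range."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat image
    return [int(round(255.0 * (v - lo) / span)) for v in values]

# Sketch of the ROS side (not runnable without a ROS environment):
#
#   import rospy
#   from sensor_msgs.msg import Image
#   from cv_bridge import CvBridge
#
#   bridge = CvBridge()
#
#   def callback(msg):
#       img = bridge.imgmsg_to_cv2(msg)  # float sonar intensities
#       display = to_uint8(img.flatten().tolist())
#       # ... reshape/save/show as needed
#
#   rospy.init_node("sonar_listener")
#   rospy.Subscriber("rexrov/depth/image_sonar", Image, callback)
#   rospy.spin()
```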

Hey @NickSadjoli, could you link to the repository where this sonar is implemented, if there is one?

@musamarcusso Apologies for not providing the links before. I have a branch with a working example of this sonar on the RexROV UUV, but please note that it also contains non-working experimental code from my earlier attempts to recreate the FLS described in the referred paper. The branch is linked below:

https://github.com/NickSadjoli/uuv_simulator/tree/realistic-sonar-sim-48

Other changes of note in this branch:

  • A customized test world, "test_turbid_water.world", heavily based on the "subsea_bop_panel.world", which I modified for use in my research project.
  • A working rexrov model containing the FLS implementation from Nils, at "uuv_descriptions/robots/rexrov_test.xacro", initially based on the "rexrov_sonar.xacro" model. It comments out the previous M450 or P900 FLS and replaces it with the FLS xacro recommended by Nils above.
  • A customized "rexrov_fls.rviz" view that makes RViz directly display the /rexrov/depth/image_raw_sonar and front default_camera topics.
  • A custom launch file, "test_turbid_water.launch", that launches the combination of changes above, directly referencing the custom world and RViz config.

If the organization of the files in the current branch is too confusing, please give me that feedback in this thread, so that I can make a cleaner version of the branch for you to pull.

Thanks, and looking forward to your feedback/opinions

  • NickSadjoli

EDIT: Forgot to attach the repository link

Hi, @NickSadjoli
I am trying out your uuv_simulator realistic-sonar-sim-48 branch. The first error I got was "missing uuv_laser_to_sonar/launch" during catkin_make install. I created an empty launch folder as a work-around. Then I ran "roslaunch uuv_gazebo_worlds test_turbid_water.launch" and got several errors, including:

  1. [Err] [gazebo_ros_image_sonar.cpp:160] We do not have clip.
  2. gzserver: symbol lookup error: /home/cchien/catkin_ws3/devel/lib/libimage_sonar_ros_plugin.so: undefined symbol: _ZN2cv3Mat6createEiPKii

as well as some warnings, including "Conversion of sensor type[depth] not supported". As a result, no sonar image is shown in RViz. Am I missing something? Any comments or suggestions? I am testing your code on Ubuntu 16.04, ROS Kinetic, and OpenCV 3.4. Thanks. C. Chien

Hi, @NickSadjoli
After tracking down the errors, it turns out the OpenCV libs were not properly linked. As a work-around, I added the required OpenCV libs explicitly to image_sonar_ros_plugin. Please let me know if there is a better way to fix the original error.

I also notice that my_frame (and map) either is not defined or is not getting its tf from world. Any idea how to fix this? Thanks for your contribution. C. Chien

Hello @chyphen777!

Apologies that this is a very late reply to your enquiry.

The error about the missing launch file was likely caused by my including several directories that weren't being used in the FLS simulation anyway, leaving a messy CMake setup. I have updated my branch to clean this up, which should fix such errors. Unfortunately, however, it seems I accidentally deleted some necessary files in this branch, which causes the Gazebo GUI to launch with errors and segmentation faults. Note that the actual Gazebo topics are still published and that RViz can still occasionally launch properly, so it's not entirely broken yet.

I'll try to fix this issue with the branch, and revert to the previous commit if the problem persists. This might take a while, however, as I am also taking care of other things at work, so I might not have much time.

As for the "We do not have clip" and "Conversion of sensor type[depth] not supported" errors, I'm not sure whether those are possible causes for the sonar image not showing up in RViz, as I was still able to have the image shown on my RViz even with those errors popping up. I think it is most likely attributed to the symbol lookup error which unfortunately I haven't encountered on my local repo yet.

On a related note, I am not sure whether the OpenCV libs need to be explicitly linked in the CMakeLists, since I was able to launch the world fine without that. I will look into this as well. Thank you for providing the band-aid fix for other users that might hit a similar issue.

Hi @chyphen777,

I have just made some minor modifications to the branch, and it seems to be working fine on my machine now, so you should be able to switch to it and compile everything with catkin_make install.

However, note that the branch seems to be unstable at times, and you might encounter the following type of error:

[Image: error output from the current, unstable branch]

If you encounter such errors, you should be able to just close and re-launch the launch file to get Gazebo and RViz up and running (at least that's what worked for me). This instability is definitely annoying, and I'll try to look into what causes it so the branch can be more stable.

Also, I unfortunately haven't taken a closer look at why my_frame and map are not defined or are not getting updated from world's tf. I suspect it might be a left-over from my previous, unsuccessful attempts at another FLS solution. Again, I'll look into this once I have more time and get back to you once I have more updates.

Cheers and thanks for your feedback C. Chien! - Nicholas S.

Hi, @NickSadjoli:

Thanks for the reply, fixes, and useful information. I am able to run your sonar simulator. Sorry for the late reply; I was side-tracked by other projects. Please keep us posted on any updates.
Regards, C. Chien.

@NickSadjoli
First off, great work on this FLS implementation for the UUV simulator; it's a huge asset for the community! I have a couple of questions about it, as I plan to make some slight changes so the FLS matches the hardware I normally deploy in the field.

  1. How can I add another sonar, for example one looking forward and one looking aft? There seems to be a lot of interdependence on the camera in the FLS image generation; looking for your guidance.

  2. What sonar is this based on, and what is its vertical aperture? If I wanted to, how would I change the sonar's vertical aperture?

  3. What is the max range of the sensor, and how would I change it if I wanted to?

  4. In the file rexrov_test.xacro, in the block that starts up the FLS, what does "samples = 100" refer to?

John

@NickSadjoli
Looking to properly cite this in some upcoming work. I will use the UUV simulator citation, but are there any other works that need to be cited? Perhaps "A novel GPU-based sonar simulator for real-time applications"?

@jake3991 At least my original implementation is based on the paper you're referring to. I do not know if @NickSadjoli has added any concepts from other works.

Great job!

@nilsbore I'm trying to understand your implementation and the image formation of the imaging sonar. Could you point me to the equations used to implement ConstructSonarImage and ConstructScanImage? For example, why is SNR computed as cv::Mat SNR = SL - 2.0*TL - (NL-DI) + TS;?

cv::Mat GazeboRosImageSonar::ConstructSonarImage(cv::Mat& depth, cv::Mat& normals)
{
  std::vector<cv::Mat> images(3);
  cv::split(normals, images);

  float intensity = 100.; // target strength
  float SL = 200.; // source level
  float NL = 30; // noise level
  float DI = 0.0; // directivity index

  if (dist_matrix_.empty()) {
    // Compute dist_matrix_ once
    // ...
  }

  cv::Mat TS = intensity*images[2]; // target strength, probably dir should be DI

  cv::Mat TL = 5*depth; // transmission loss
  cv::multiply(TL, dist_matrix_, TL);

  cv::Mat SNR = SL - 2.0*TL - (NL-DI) + TS;
  SNR.setTo(0., SNR < 0.);

  // ...

@witignite This code is from quite some time ago, and I don't remember exactly where I got those equations. Honestly, the focus when implementing this was more on creating a realistic-looking image than a completely correct sonar model. A good start is probably to look at the paper that @jake3991 referenced.
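For what it's worth, the expression does match the standard active sonar equation, SNR = SL - 2·TL + TS - (NL - DI), where transmission loss is counted twice for the two-way path and the directivity index offsets the noise level (all in dB). A scalar Python sketch of what the per-pixel cv::Mat arithmetic reduces to, using constants mirroring the snippet (the transmission-loss and target-strength inputs below are made-up example values):

```python
def sonar_snr(source_level, transmission_loss, target_strength,
              noise_level, directivity_index):
    """Active sonar equation; all quantities in dB.

    Transmission loss is applied twice (out to the target and back),
    and the directivity index offsets the ambient noise level.
    """
    snr = (source_level - 2.0 * transmission_loss
           + target_strength - (noise_level - directivity_index))
    return max(snr, 0.0)  # the plugin clamps negative SNR to zero

# Constants as in the snippet above: SL=200, NL=30, DI=0.
print(sonar_snr(200.0, 40.0, 50.0, 30.0, 0.0))  # 140.0, a strong return
print(sonar_snr(200.0, 90.0, 10.0, 30.0, 0.0))  # 0.0, clamped
```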

For those interested in multibeam sonar in Gazebo, I've built a multibeam sonar plugin in the Dave project, which incorporates uuv_simulator too. It uses the NVIDIA CUDA library and calculates intensity/range data at up to a 10 Hz refresh rate with 900 kHz frequency and 10 m range. For more details: https://github.com/Field-Robotics-Lab/dave/wiki/Multibeam-Forward-Looking-Sonar
[Image: multibeam forward-looking sonar output from the Dave project]

Hi, I'm so sorry for replying to this thread only after a very long time; I've been tackling another massive component of my project.

This effectively means I'm no longer working on the development of the simulator, as I handed this workload over to a colleague of mine who has been handling it since last year. Unfortunately, I forgot to point him to this thread in case questions were posted about the development.

Also, for clarification: while I maintained the sonar model, I did not modify any of the lower-level code (i.e., the C++ source) written by @nilsbore for the sonar. My colleague, however, may have made some small tweaks to change its behavior so as to more realistically simulate FLS behavior in turbid underwater environments.

@jake3991 To answer your questions:

  1. From what I understand, my colleague has so far only managed to 'rotate' the sonar by tweaking some of its URDF parameters. I'll ask him how it's done in more detail if you're still interested.
  2. Initially the sonar was still based on the Blueview sonar that @nilsbore used. Recently, however, my colleague figured out how to tweak the aperture to follow a different sonar's specifications, specifically the Blueview M750D. I believe he managed it by adding some parameters to the xacro call, but again I'll need to ask him for the details of that implementation.
  3. Same answer as 2.
  4. Unfortunately, I didn't manage to play around with the software enough to have a definite answer on this. I need to ask my colleague about this one.

Please also note that my colleague is planning to publish a paper on the uuv_simulator-based simulator he has been working on, so further details on our project's current implementation will need to be directed to him. I'll try to connect him to this thread if necessary to answer further questions about our implementation.

With regards to an actually "accurate" sonar model, the work linked by @woensug-choi may well be a more accurate model, as it uses a more direct ray-traced implementation, though I'm not completely sure myself how much better that is for simulation-based development overall.

I'll have my colleague check that implementation and see how well it could be integrated into my project's simulator.

Again, huge apologies to everyone for the very late reply to this thread.

@jake3991 Also, for clarification regarding citation of our project's simulator: since we don't yet have a successful paper publication on the simulator, I suggest you cite the paper you linked as well as @nilsbore's work.

Once our publication on the simulator is accepted, then our paper could perhaps also be cited.

Do note that I am using https://github.com/uuvsimulator/uuv_simulator from @musamarcusso, which has already integrated @nilsbore's sonar module.
To clarify @NickSadjoli's answers to @jake3991's questions:

  1. We added a URDF for the sonar itself, so by adding another URDF instance and remapping the parameter names, you can have two sonars in your simulator.

  2. As @NickSadjoli said, we were referencing a Blueview sonar. To tweak the VFOV of the sonar: in the sonar's URDF xacro file you specify the HFOV and the width and height of the image. Gazebo then calculates the focal length based on the image width and HFOV, and, using the same focal length, reverse-calculates the VFOV from the image height. So to get the VFOV you want, you need to calculate the required image height.

Focal length = (Width/2) / tan( deg2rad(HFOV)/2 ) or Focal length = (Height/2) / tan( deg2rad(VFOV)/2 )

  3. You need to tweak the C++ code of the sonar. In @nilsbore's C++ code, I added a subscriber for the "Range" value instead of the constant range in his original work. This way I am able to play with the maximum displayed range of the sonar, like most sonars do.

  4. In the xacro for the "FLS", the "samples" param is not used. In the xacro for the "Multibeam", the Gazebo laser points are used to construct the multibeam; see the "GPU Laser" section of http://gazebosim.org/tutorials?tut=ros_gzplugins.

Hope this helps.
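To make the height calculation above concrete, here is a quick Python sketch of the focal-length relation (the 1280-px width, 90° HFOV, and 20° VFOV are just example numbers, not values from any particular sonar):

```python
import math

def focal_length_px(width, hfov_deg):
    """Focal length in pixels from image width and horizontal FOV,
    per the pinhole model: f = (width/2) / tan(hfov/2)."""
    return (width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)

def height_for_vfov(width, hfov_deg, vfov_deg):
    """Image height (pixels) that yields the desired vertical FOV,
    given that Gazebo derives VFOV from the same focal length."""
    f = focal_length_px(width, hfov_deg)
    return 2.0 * f * math.tan(math.radians(vfov_deg) / 2.0)

# A 1280-px-wide image with 90-degree HFOV and a desired 20-degree VFOV:
f = focal_length_px(1280, 90.0)        # 640.0 px
h = height_for_vfov(1280, 90.0, 20.0)  # ~225.7 px, so round to 226
```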

@nilsbore and @NickSadjoli, hi, I just tried the FLS plugin provided in uuv_sensor_ros_plugin. I have some questions about it:

  1. I tried to set the topic in sonar_snippets.xacro, but when the robot was simulated, the topic I set did not appear. I only see this kind of topic: "/rexroth/depth/sonar raw_image". Why is that?
  2. In sonar_snippets.xacro, what do ${width} and ${height} mean? Do they relate to the width and height of the generated sonar image? If so: I tried setting ${width} and ${height} and then printed the width and height of the sonar image topic, but those values were different. Could you please explain why this happens?
  3. Is it possible to extend the range of the sonar, so that I can see farther obstacles in the sonar image?

@Jenanaputra Kindly refer to my comment above.

  1. How did you set the topic? Did you change the ${topic} parameter in a xacro file, or did you modify the C++ code to use a specific topic? (For me, modifying the code sounds more appetising.)
  2. The sonar is actually a modification of the Gazebo depth camera. In a Gazebo camera plugin, ${width} and ${height} affect the VFOV and HFOV of the sensor (see the equation in my previous post). For a camera, the width is usually 1280 and the height 720. For the sonar, I set the width to 1280 and calculate the height using the formula; this way I get the VFOV I want.
  3. As mentioned in my previous post, modifying the range requires modifying the C++ code. The range is currently a hard-coded fixed value.
    Finally, don't forget to catkin clean and catkin_make after modifying the C++ code (assuming you use a ROS package).

@loguna123 Thanks for your reply. Do you know how to get or inspect the depth value used in this FLS plugin?

