GPS waypoint navigation demo #1631
@rotu is this something you guys at Rover would be interested in working on? It would be both a good advertisement for your platform and a chance to show using navigation2 and robot_localization to accomplish some outdoor GPS-guided task (with or without a pre-generated map), and at that point write up a little Nav2 website page / tutorial about it. I have a lot of toys lying around, but this is the antithesis of the toys I have (all my robots are differential-drive and indoor: no GPS, but tons of lasers, and no map required...).
Some additional notes:
I am interested in this, preferably without SLAM initially. I don't have the hardware on hand for now (GPS or LIDARs); I will in the future, though. I am determined to dive deep into this, but I had better have someone who can collaborate with me. @SteveMacenski, maybe you could summarize the high-level architecture of classes/packages to realize a suitable integration of this with
For completeness from a post on the Slack, the 3 demos I had in mind are as follows:
What might be your timing on hardware? This task is 3-fold: creating something that works doing this, documenting the setup and process to enable it, and demonstrating that it's working. It should be possible given the state of the stack now to do it; it's mostly about documenting it so users have it as a 'feature' and showing it work in the real world in those 3 primary situations (though you personally aren't required to do all of them, I'd just like those 3 to be documented at some point). The only packages you need to use for this are navigation2 and robot_localization.
All 3 demos are on my list, though like I said, I'd like to move slowly and deliberately. Let's focus on the first demo for now.
Say 2-3 months from now. I am quite confident that if we make things work within simulation, it will take little time to test on real hardware (I will have access to a ready-to-roll outdoor robot). About making the demo public (videos etc.), I will confirm with my affiliation, but I expect it will be possible.
Do we have any diagram/scheme of the components that are going to be involved in the first demo?
Almost, but I think at the moment we need the above process. I agree there isn't much lacking for

After we have the demos in hand, documenting isn't that hard. Though, where should we document it?
Got it - yes, documenting on navigation.ros.org.

If you were running SLAM, you'd have SLAM publishing an odom->map transform that nav uses to position. If you had localization with AMCL, same thing. Basically, all you need to do is replace the TF publisher for that transform from a "lidar thing" to a "GPS thing". That "GPS thing" is robot_localization. It has something called the navsat transform that takes in IMU / GPS data and provides a map-framed pose. Then, with an instance of the robot_localization EKF, we smooth that and publish the appropriate transform. See the 'dual navsat EKF' config files in robot_localization for examples of that (1 EKF for odometry, 1 EKF for GPS smoothing, 1 NavSat for processing GPS fixes). Poof, things should work.

Then the question on costmap_2d is around how to size it. Typically, lidar SLAM will give you a static map which sets the size of the map for use in costmap_2d. Now that you don't have a static map setting that size, you have to use the width / height / rolling parameters (depending on which demo we're doing at the time) to set the size of the space for the robot to operate in / roll around. The rest works exactly the same: the obstacle/voxel layer will process obstacles in the scene into the costmap for use in planning / control to get you to your goals. The inflation layer inflates. Just now there's no static layer for the no-SLAM case, so you're navigating based solely on the measurements of the environment.
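To make the dual-EKF layout above concrete, here is a minimal config sketch in the style of robot_localization's dual-EKF navsat example. Node names, topic names, and frame IDs are placeholders for your robot, and the per-sensor fusion matrices (e.g. odom0_config) are omitted for brevity:

```yaml
# EKF 1: fuses wheel odometry + IMU, publishes odom -> base_link.
# Its world frame is odom, so output is smooth and continuous (REP-105).
ekf_local:
  frequency: 30.0
  two_d_mode: true
  odom_frame: odom
  base_link_frame: base_link
  world_frame: odom
  odom0: /wheel/odometry
  imu0: /imu/data

# EKF 2: additionally fuses the navsat output, publishes map -> odom.
# Its world frame is map; GPS corrections may make it jump slightly.
ekf_global:
  frequency: 30.0
  two_d_mode: true
  map_frame: map
  odom_frame: odom
  base_link_frame: base_link
  world_frame: map
  odom0: /wheel/odometry
  odom1: /odometry/gps   # map-framed odometry from navsat_transform
  imu0: /imu/data

# Converts GPS fixes + IMU heading into a map-framed odometry message.
navsat_transform:
  frequency: 30.0
  magnetic_declination_radians: 0.0
  yaw_offset: 0.0
  zero_altitude: true
  broadcast_utm_transform: false
  publish_filtered_gps: true
```

The key idea is that only the second EKF ever sees GPS data, so the odom frame stays smooth while the map frame absorbs GPS corrections.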
I configured the

My rough thoughts on costmaps are: for the local_costmap, something like (40, 20, 0.2) respectively for

Should this local_costmap be at
Well, you don't want it to be something hilariously large, because then you might have to update that hilariously large thing or put a massive strain on your memory. Since the static map essentially sizes your costmap for your application, this is now a designer parameter for you to set based on your application's needs. If, for instance, your application has waypoints that are 10 cm from each other, then your costmap really doesn't need to be very much larger than your local costmap, just large enough to get around obstacles in the way. Maybe 10x10 meters, but that's just a guess.

The aims of your local costmap haven't changed, so I'd keep that around the same size / resolution as you typically would for a comparable SLAM application. The local costmap should be in odom. The real question here is whether the global costmap should be in the map or odom frame and whether it should be rolling. It all depends on your waypoint distance and needs.

Can you highlight for us what you're actually trying to do that falls in line with this demo? See above for my 3 canonical examples; do any of those describe your aim (and if not, in those terms, what is your aim)?
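As a sketch of what that sizing advice looks like in Nav2 parameters (sizes here are illustrative guesses for an application like this, not recommendations):

```yaml
# Global costmap: no static layer, so the size is set explicitly.
# For the fixed-size case it lives in map; for the very-large-area
# case, set rolling_window: true so it follows the robot instead.
global_costmap:
  global_costmap:
    ros__parameters:
      global_frame: map
      rolling_window: false
      width: 50        # meters; pick based on waypoint spread
      height: 50
      resolution: 0.1
      plugins: ["obstacle_layer", "inflation_layer"]  # note: no static_layer

# Local costmap: same sizing you'd use for a comparable SLAM setup.
local_costmap:
  local_costmap:
    ros__parameters:
      global_frame: odom
      rolling_window: true
      width: 10
      height: 10
      resolution: 0.05
```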
I have started from zero to integrate and adapt
My current thought is that it had better be at map, as GPS wouldn't accumulate slips/shifts from the wheels causing odom to drift over time.
The task I am trying to do here is to make the robot follow a given set of GPS waypoints in an outdoor field. The robot is aimed at farming tasks such as plant treatment with UV light or grass mowing, etc. The routes will be given in the form of GPS waypoints and will be drawn by a human.

Edit: I did an initial waypoint test, though it isn't GPS waypoints yet; I just set the waypoints with the usual RVIZ plugin to check the response. A few other things: in the test local costmap, for obstacles and local paths, I rely on meaningful sizes of the local costmap. Now they are
Any questions there? It's good to hear your process, but just making sure you didn't expect some specific response :-) What's wrong with the map frame? It can be a little jumpy; that's fine. A 1 meter distance between points is also pretty dense. It's just not dense enough that you could send the waypoints as the path to the controller directly, such that there wouldn't largely be a need for a path planner.
In the simulation environment the GPS waypoint following is now working (with pure GPS points). ATM there is planning between waypoints. Waypoints are no less than 1 m apart. However, the first use case mentions:
It's so dense that it won't require global planning, hence directly feeding the waypoints to the controller. So maybe what I have done so far falls into the second use case that you mentioned above, as there is planning in between? If that's the case, I could create denser GPS waypoints and call upon
I updated some of the descriptions; I don't think I added enough / correct information. 100% my bad. I agree, demo 1 needs to be better defined in intent. I updated 2 and 3 to be better defined. 2 was to mean that it is a reasonably sized environment, so we can have a SLAM map augmenting GPS (demo 2 is, tl;dr, GPS + SLAM or GPS + map). 3 was to mean it's so large we can't possibly map it, nor can we have a fixed-size costmap since it's so large (demo 3 is, tl;dr, GPS + rolling costmaps). So I think, to close the loop on all the options, demo 1 would be GPS + fixed-size costmap and no SLAM (tl;dr, GPS + fixed-size costmap: the most basic non-SLAM, non-large, known-sized-space demo).

I don't know what I was trying to get across with the density of points. I think, separately from these 3 situations, there's a question of waypoint density. By that I mean
As you can tell, those concepts of waypoint density and the situations involving rolling costmaps / SLAM in conjunction with GPS are totally decoupled and unrelated. Let's just ignore the density discussion for right now. The "dense" situation is also where we want to follow a dense waypoint-defined route, not waypoint following (stopping at each). That's an entirely different task that is probably actually a new planner plugin. I added that separate topic of GPS path following to the algorithms ticket #1710. This is separate from this ticket, which is just GPS waypoint following. Sorry for conflating those concepts; I don't think that was well fleshed out in my mind until now.
No worries; at the current progress, the definition of these 3 demos hasn't affected any efforts. The demos involving SLAM will come later on my side. At the moment I'm aiming navigation into the wild, where it doesn't help to do SLAM because of the size and characteristics of the area I would like to cover. But there will be SLAM in the upcoming stages, after I have GPS waypoint following set up reliably. Right now it works as best effort. If we leave the density of GPS points aside,
Your odom probably only has IMU / odometry, and the odom frame per REP-105 should be smooth and continuous. The map frame transform does not require that, and you're integrating GPS measurements. If the GPS model has added noise, then you would see that happen.
To update here with progress...

# [lat, long, alt]
gps_waypoint0: [-2.263097140589225e-08, -3.362884290260419e-05, 0.6342230932787061]
gps_waypoint1: [-5.88445328989623e-08, -8.854168442669975e-05, 0.634228971786797]
gps_waypoint2: [-8.401593503376237e-08, -0.0001260507704191531, 0.6342366030439734]
gps_waypoint3: [-1.5119029095705861e-05, -0.00017085240798748728, 0.6343228798359632]
gps_waypoint4: [-5.5533304225340664e-05, -0.00022202050093127968, 0.6342789707705379]
gps_waypoint5: [-0.00011934284522280326, -0.00024864146666783363, 0.634503205306828]

then the node converts them to

I did the same demo on another Gazebo outdoor world where the ground is uneven. It performs OK there too, but it's painfully slow; the real-time factor of Gazebo drops because the area is too large and there's no dedicated computer for it ATM.
The costmap we see is clearly the local rolling costmap (since the path planner goes off of it) - just verifying that when you do visualize the global costmap, it is also rolling, since it's not visualized here.
I think this meets all the core needs for Demo 3 - GPS only, unsized environment, planning between waypoints. Ideally for the formal documentation we could use a more complex environment if simulated (like a campus simulation area, or add some buildings or something), but this is the core demo. I agree this is sufficient to document / explain the configuration file changes / any mods or new packages in Nav2 to handle GPS, even without hardware. We can use a simulated experiment to show it as a placeholder for a real hardware robot (or just keep the simulation if the environment is demonstrative enough). For the 'show time' screen capture, it would be good to have the global costmap showing so that users can see it rolling visually.

I think this is good to go to start writing up the documentation! Awesome job :-)

An aside: are you using an open-sourced controller plugin? I see it backs up a little; I know TEB does that, but I'm just a little curious.

Edit: I shared this video on the Slack in the cool-things-to-share channel. I tried to tag you, but it looks like you're not on there under your name or a GitHub ID that I could find.
Actually, both costmaps are rolling-enabled. I set the
This node does not duplicate but before that it does the GPS coordinates -> map coordinates conversion.
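For a rough feel of what such a conversion does, here is a minimal sketch using an equirectangular approximation around a fixed datum point. This is illustrative only: the function and parameter names are hypothetical, and a real setup would use robot_localization's UTM-based navsat transform rather than this approximation:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, in meters

def gps_to_map(lat, lon, datum_lat, datum_lon):
    """Approximate a GPS fix as local map-frame (x, y) in meters,
    relative to a datum, via an equirectangular projection.
    Adequate only for small areas; errors grow with distance."""
    d_lat = math.radians(lat - datum_lat)
    d_lon = math.radians(lon - datum_lon)
    # Longitude degrees shrink with latitude, hence the cosine term.
    x = EARTH_RADIUS_M * d_lon * math.cos(math.radians(datum_lat))
    y = EARTH_RADIUS_M * d_lat
    return x, y
```

With a datum at (0, 0), one degree of longitude at the equator comes out to roughly 111 km, which matches the usual rule of thumb.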
Yes, it would be better to show it in an environment that has more visuals. I will take care of that and come up with a new video.
I will change the RVIZ settings and make sure the global costmap is clearly visible. Yes, this is the TEB controller, for dealing with ackermann kinematics. I guess DWB isn't suited for this kind of robot.
I'm not concerned if the robot model is open source. That doesn't impact the GPS following settings changes, so I think you're good to go there. Other people can set it up with their respective robots; no need for TB3.

Ah ok, https://join.slack.com/t/navigation2/shared_invite/zt-hu52lnnq-cKYjuhTY~sEMbZXL8p9tOw is the link to join.

DWB isn't currently set up to do ackermann, but it could be. Really, the only thing missing is an
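Whatever the exact missing component, the core kinematic constraint an ackermann-aware trajectory generator has to honor is a minimum turning radius. A minimal sketch of that constraint using the bicycle model (the helper names here are illustrative, not DWB API):

```python
import math

def min_turning_radius(wheelbase_m, max_steer_rad):
    """Smallest arc radius a bicycle-model ackermann vehicle can drive,
    given its wheelbase and maximum steering angle."""
    return wheelbase_m / math.tan(max_steer_rad)

def is_feasible(arc_radius_m, wheelbase_m, max_steer_rad):
    """True if a commanded arc radius respects the steering limit.
    A sampler would discard candidate trajectories failing this check."""
    return abs(arc_radius_m) >= min_turning_radius(wheelbase_m, max_steer_rad)
```

A velocity sampler restricted this way only generates arcs the steering hardware can actually execute, which is the essential difference from the differential-drive case where in-place rotation is allowed.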
Here is a better video where the robot pays a visit to a gas station following a set of GPS waypoints. From time to time RVIZ lags updating the sensory data, but the code works as expected and the robot executes all waypoints. I had to switch off some of the recovery servers (e.g.
Currently, control is not at the forefront of development for the project, but it might be when we achieve some level of accuracy and want to improve the robot's behavior when taking sharp turns, or decrease that back-and-forth movement when it adjusts its orientation. Is there any predicted potential advantage of that over TEB, strictly speaking for ackermann?
Hello,
Hi @davidgrenner,
Hopefully all the code (minus your custom robot) should live somewhere in a nav or related repository. On the environment / waypoint coordinates: tutorials is probably the place. For the actual waypoint following logic, I'd like that to be in the main repo,
Adding a dependency for 1 package on RL is totally OK.
It's a more tunable cost function: you can add new critics or weight them as you like in order to get the desired behavior. TEB works when it works, but there's very little introspectability or tuning for specific behaviors. DWB lets you create, remove, or add arbitrary critics to get the behaviors you're looking for. If TEB works fine for your needs, no need to reinvent the wheel, so stick with it; but if you need to tune it to get a specific behavior you want, you may find it very hard to tune or impossible to get certain behaviors. It's application- and developer-specific.

Thanks for volunteering @davidgrenner! Like @jediofgever said, we'll be ready for that soon, and we'd love to have a hardware video too (maybe also help document one of the other demos!)
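To illustrate what "swappable critics" looks like in practice, here is a sketch of a DWB controller configuration. The critic names are standard dwb_critics plugins, but the scale values are illustrative placeholders, not tuned recommendations:

```yaml
controller_server:
  ros__parameters:
    FollowPath:
      plugin: "dwb_core::DWBLocalPlanner"
      # The critic list is fully editable: add, remove, or reorder
      # critics to shape the behavior you want.
      critics: ["RotateToGoal", "Oscillation", "BaseObstacle",
                "GoalAlign", "PathAlign", "PathDist", "GoalDist"]
      # Per-critic weights: raising PathAlign/PathDist hugs the path
      # more tightly; raising GoalAlign/GoalDist favors beelining.
      PathAlign.scale: 32.0
      GoalAlign.scale: 24.0
      PathDist.scale: 32.0
      GoalDist.scale: 24.0
```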
@SteveMacenski thanks for the details on DWB critics. For now TEB works quite well for me (even better than I wished for), so I might revisit the controller part at a later time. There are two PRs open now (referenced just above this comment); the first introduces the GPS waypoint follower
Hello @SteveMacenski, @jediofgever I have a few questions, could you please clarify? Thanks a lot!
Can a vector/array be included here to encompass multiple sources? The keyword for camera?
Hi @KSorte,
@jediofgever,
Imminently in progress for merging!
Using the robot_localization navsat transform to create an outdoor GPS waypoint following demo, both with and without fusing in some other mapping methods (v-slam, lidar slam, localizers, etc.).
The with-SLAM case is for increased accuracy and for having a map for a planner to work in, if the waypoints are too sparse in a complex environment for the local planner to reliably navigate. If you're in a maze and the waypoints only define the end goal, you will probably need a planner to route through the maze to follow the GPS.
If you're in an open space, the waypoints are dense, or there's a straight-line view from each waypoint, then a planner is largely unnecessary, and SLAM wouldn't be much additional help beyond additional robustness in positioning against GPS drift.
For regular-interval waypoints in potentially massive (1 km x 1 km) dynamic environments where it's not realistic to map, have the global costmap be rolling, with a size sufficient to capture the current and next goals, so it can plan around large obstacles that the controller may not be able to reliably navigate around. The rolling global costmap then frees you from mapping this massive space or holding a full 1 km x 1 km costmap in memory.
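As a back-of-envelope sizing rule for that rolling window (a heuristic assumption on my part, not a Nav2 formula): with the robot centered in the window, the half-side must cover the distance to the next goal plus some margin for detours around obstacles:

```python
def rolling_costmap_size(max_waypoint_spacing_m, detour_margin_m=10.0):
    """Heuristic side length (meters) for a rolling global costmap.
    The robot sits at the window center, so the half-side must reach the
    farthest upcoming waypoint plus a detour margin on every side."""
    return 2.0 * (max_waypoint_spacing_m + detour_margin_m)
```

For example, waypoints up to 50 m apart with a 10 m detour margin suggest a window around 120 m on a side, far smaller than a full 1 km x 1 km grid.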
Rel. links:
https://docs.swiftnav.com/wiki/ROS_Integration_Guide
https://answers.ros.org/question/218137/using-robot_localization-with-amcl/