
SCIENCE CHINA Information Sciences, Volume 61, Issue 5: 050102 (2018) https://doi.org/10.1007/s11432-017-9380-5

RoboCloud: augmenting robotic visions for open environment modeling using Internet knowledge

  • Received: Nov 16, 2017
  • Accepted: Mar 13, 2018
  • Published: Apr 18, 2018

Abstract

Modeling an open environment that contains unpredictable objects is a challenging problem in robotics. In traditional approaches, when a robot encounters an unknown object, an error is inevitably introduced into its environment model, severely constraining the robot's autonomy and possibly leading to disastrous consequences in certain settings. The abundant knowledge accumulated on the Internet has the potential to remedy the uncertainty that results from encountering unknown objects. However, robotic applications generally demand a guaranteed quality of service (QoS), so directly accessing the unpredictable Internet is usually unacceptable. RoboCloud is proposed as a novel approach to environment modeling that takes advantage of the Internet without sacrificing critical QoS properties. RoboCloud is a layered “mission cloud–public cloud” organization model in which the mission cloud provides a QoS-assured environment modeling capability based on built-in prior knowledge, while the public cloud comprises existing services on the Internet. The “cloud phase transition” mechanism seeks help from the public cloud only when a request falls outside the knowledge of the mission cloud and the QoS cost is acceptable. We adopt semantic mapping, a typical robotic environment modeling task, to illustrate and substantiate our approach and its key mechanism. Experiments on open 2D and 3D datasets and with real robots demonstrate that RoboCloud can augment robotic vision for open environment modeling.
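To make the layered “mission cloud–public cloud” organization concrete, the following minimal Python sketch illustrates the cloud phase transition dispatch described above. It is only an illustration under assumptions: the class names, method signatures, and parameters below are hypothetical and are not APIs defined in this paper.

    from typing import Optional, Tuple

    class MissionCloud:
        """QoS-assured service backed by built-in prior knowledge (hypothetical)."""
        def recognize(self, image) -> Tuple[str, float]:
            # Returns (label, confidence); unfamiliar objects fall into
            # the "other-objects" class.
            raise NotImplementedError

    class PublicCloud:
        """Existing Internet service with unpredictable latency (hypothetical)."""
        def recognize(self, image, timeout_s: float) -> Optional[str]:
            # Returns a label, or None if no answer arrives in time.
            raise NotImplementedError

    def recognize_with_phase_transition(image, mission: MissionCloud,
                                        public: PublicCloud,
                                        conf_thr: float, t_tol_s: float,
                                        t_est_s: float) -> str:
        label, conf = mission.recognize(image)
        # Stay in the mission cloud when it knows the object well enough.
        if label != "other-objects" and conf >= conf_thr:
            return label
        # Cloud phase transition: consult the public cloud only if its
        # estimated response time fits the application's QoS budget.
        if t_tol_s >= t_est_s:
            answer = public.recognize(image, timeout_s=t_tol_s)
            return answer if answer is not None else "unknown"
        return label  # budget too tight: keep the mission cloud's result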



  • Figure 1

    (Color online) Recognition latency of CloudSight.

  • Figure 2

    RoboCloud semantic mapping system.

  • Figure 3

    (Color online) Experiments for Faster R-CNN-CloudSight. (a) mAP with unfamiliar objects; (b) latency with unfamiliar objects; (c) mAPs with combinations of variables.

  • Figure 4

    (Color online) Experiments for CORE-CloudSight. (a) Accuracy with unfamiliar objects; (b) latency with unfamiliar objects; (c) accuracies with combinations of variables.

  • Figure 5

    (Color online) Test environment from the TurtleBot's perspective.

  • Figure 6

    (Color online) Semantic mapping for Faster R-CNN-CloudSight. (a) Semantic map (only Faster R-CNN); (b) semantic map (Faster R-CNN and CloudSight).

  • Figure 7

    (Color online) mAP considering real-time constraints.

  • Figure 8

    (Color online) Semantic mapping for CORE-CloudSight. (a) Semantic map (only CORE); (b) semantic map (CORE and CloudSight).

  • Algorithm 1 2D object-level semantic information cognition

    Require: Scene image $x$ captured by the robot;

    Output: Class label set $C$ describing the objects in the image and a label $c$ for each object.

    Use a CNN to build a feature map for the image;
    Use an RPN to obtain bounding boxes for the objects in the image;
    Obtain features $x$ for each bounding box;
    for each $x$ do
        $(c_{\rm mission}, \Psi) = \text{Faster R-CNN}(x)$;
        if $c_{\rm mission} \ne \text{``other-objects'' class}$ and $\Psi \ge \Psi_{\rm thr}$ then
            $c = c_{\rm mission}$;
        else
            if $t_{\rm tol} \ge t_{\rm est}$ then
                if ${\rm hasvalue}(x, t_{\rm tol})$ then
                    $c_{\rm public} = {\rm CloudSight}(x)$;
                else
                    $c_{\rm public} = \text{``unknown''}$;
                end if
                $c = c_{\rm public}$;
            else
                $c = c_{\rm mission}$;
            end if
        end if
    end for
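    For reference, Algorithm 1 can be transcribed into Python as below. This is a sketch, not the authors' implementation: the two recognizer callables stand in for Faster R-CNN (mission cloud) and CloudSight (public cloud), and returning None from the public-cloud callable models the hasvalue check failing to produce an answer within the tolerable latency.

        from typing import Callable, List, Optional, Tuple

        def cognize_objects(
            box_features: List[object],                            # one feature set per bounding box
            faster_rcnn: Callable[[object], Tuple[str, float]],    # -> (c_mission, Psi)
            cloudsight: Callable[[object, float], Optional[str]],  # -> label, or None on timeout
            psi_thr: float,   # confidence threshold Psi_thr
            t_tol: float,     # latency the request can tolerate
            t_est: float,     # estimated public-cloud response time
        ) -> List[str]:
            """Assigns a class label to each detected object, following Algorithm 1."""
            labels = []
            for x in box_features:
                c_mission, psi = faster_rcnn(x)
                if c_mission != "other-objects" and psi >= psi_thr:
                    c = c_mission            # mission cloud is confident enough
                elif t_tol >= t_est:
                    # Public cloud is affordable; ask it, falling back to
                    # "unknown" if no answer arrives within t_tol.
                    c_public = cloudsight(x, t_tol)
                    c = c_public if c_public is not None else "unknown"
                else:
                    c = c_mission            # QoS cost unacceptable: keep mission label
                labels.append(c)
            return labels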
