Hello I'm

Sawera Yaseen

I am an Erasmus Mundus Scholar in Intelligent Field Robotic Systems (IFRoS). This joint master’s program is offered by Universitat de Girona, Spain, and the University of Zagreb, Croatia. Currently, I am working on my Master’s thesis at the Smart Mechatronics and Robotics (SMART) research group at Saxion University of Applied Sciences. My research focuses on the detection of invisible obstacles for drone obstacle avoidance. I am passionate about robot perception and autonomous navigation, with a strong interest in applying robotics and AI to solve real-world challenges. My goal is to contribute to cutting-edge advancements in robotics, particularly in intelligent autonomous systems.

Education


2023 – Present

Erasmus Mundus Joint Master's in Intelligent Field Robotic Systems

  • Semester 1 & 2 – University of Girona: Autonomous Systems, Multiview Geometry, Machine Learning, Probabilistic Robotics, Robot Manipulation, Hands-on Planning, Hands-on Perception, Hands-on Localization, Hands-on Intervention
  • Semester 3 – University of Zagreb (Specialization in Aerial Robotics and Multi-Robot Systems): Aerial Robotics, Multi-Robot Systems, Deep Learning, Robotic Sensing Perception and Actuation, Human-Robot Interaction, Ethics and Technology

2018 – 2022

Bachelor's in Mechatronics Engineering (CGPA: 3.95/4.00)

Mehran University of Engineering and Technology, Jamshoro, Pakistan

Thesis: Deep Learning Based Smart Spraying System for Disease Detection of Vegetables

Projects


Swarm Control

Controlling a Swarm of Crazyflies using Reynolds Rules and Consensus Protocol

This project focused on controlling a swarm of Crazyflies using Reynolds Rules and a Consensus Protocol. A centralized swarm control approach was implemented in ROS2 and the Gazebo simulator, enabling both simulated and real-world swarm behaviors. Reynolds Rules were applied for flocking, the consensus-based rendezvous protocol was used for agent convergence, and consensus-based formation control was used to achieve geometric configurations. Different communication topologies were analyzed to study their impact on swarm dynamics. The algorithms were validated on real Crazyflies, demonstrating their effectiveness in coordinated multi-agent aerial robotics.
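The core of the consensus-based rendezvous protocol can be sketched in a few lines. This is a minimal illustration with made-up positions, topology, and gain, not the project's actual ROS2 code: each agent repeatedly moves toward its neighbours, and on a connected graph all agents converge to a common point.

```python
def rendezvous_step(positions, adjacency, eps=0.1):
    """One discrete consensus update: x_i += eps * sum_j a_ij * (x_j - x_i)."""
    new_positions = []
    for i, (xi, yi) in enumerate(positions):
        dx = sum(a * (positions[j][0] - xi) for j, a in enumerate(adjacency[i]))
        dy = sum(a * (positions[j][1] - yi) for j, a in enumerate(adjacency[i]))
        new_positions.append((xi + eps * dx, yi + eps * dy))
    return new_positions

# Fully connected topology for three agents (illustrative values)
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
pts = [(0.0, 0.0), (4.0, 0.0), (2.0, 6.0)]
for _ in range(200):
    pts = rendezvous_step(pts, A)
# With a symmetric topology the agents meet at the initial centroid, here (2, 2).
```

Changing the adjacency matrix (e.g. to a ring or line topology) changes the convergence speed, which is exactly the communication-topology effect the project analyzed.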
Pedestrian Dead Reckoning

Pedestrian Dead Reckoning (PDR)

This project implements a Pedestrian Dead Reckoning (PDR) system using IMU sensor data from a smartphone to estimate a person’s movement without relying on external references like GPS. The system detects steps from accelerometer data, estimates stride length using an empirical model, and determines heading by fusing gyroscope and magnetometer data with a complementary filter. The estimated trajectory is reconstructed based on stride length and heading over time.
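The PDR update loop described above can be sketched as follows. The filter gain, stride length, and sensor values are invented for the example; the real system derives them from the smartphone IMU data and an empirical stride model.

```python
import math

def fuse_heading(prev_heading, gyro_rate, dt, mag_heading, alpha=0.98):
    """Complementary filter: trust the integrated gyro short-term,
    the magnetometer long-term."""
    gyro_heading = prev_heading + gyro_rate * dt
    return alpha * gyro_heading + (1 - alpha) * mag_heading

def step_update(pos, heading, stride):
    """Advance the position by one stride along the current heading."""
    x, y = pos
    return (x + stride * math.cos(heading), y + stride * math.sin(heading))

pos, heading = (0.0, 0.0), 0.0
for _ in range(10):                      # ten detected steps, walking east
    heading = fuse_heading(heading, gyro_rate=0.0, dt=0.5, mag_heading=0.0)
    pos = step_update(pos, heading, stride=0.7)
# After ten 0.7 m strides due east, the estimate is (7.0, 0.0).
```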
Human Detection and Tracking

Human Detection and Tracking

This project implements a real-time human detection and tracking system using YOLOv9 for accurate object detection and DeepSORT for multi-object tracking. The system assigns unique IDs to individuals, maintaining identity consistency even in challenging scenarios with occlusions and motion variations. By leveraging Kalman filtering and appearance-based association, it ensures robust tracking in dynamic environments. The system was tested under diverse conditions, addressing challenges such as occlusions, identity switches, and tracking interruptions, demonstrating its effectiveness in real-world scenarios.
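The association step at the heart of tracking-by-detection can be illustrated with a stripped-down greedy IoU matcher. This is a simplification for illustration only: DeepSORT additionally uses Kalman-predicted boxes, appearance embeddings, and the Hungarian algorithm rather than greedy matching.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, threshold=0.3):
    """Greedy matching: each track claims its best-overlapping free detection."""
    matches, used = {}, set()
    for tid, box in tracks.items():
        best, best_iou = None, threshold
        for d, det in enumerate(detections):
            if d not in used and iou(box, det) > best_iou:
                best, best_iou = d, iou(box, det)
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches

# Two existing tracks, two new detections (toy coordinates)
tracks = {1: (0, 0, 10, 10), 2: (20, 20, 30, 30)}
detections = [(21, 21, 31, 31), (1, 1, 11, 11)]
matches = associate(tracks, detections)   # track 1 -> detection 1, track 2 -> detection 0
```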
Stereo Visual Odometry

Stereo Visual Odometry Using the KITTI Dataset

This project focused on implementing a stereo visual odometry (VO) pipeline in Python, leveraging the KITTI dataset. The pipeline processes stereo image pairs by detecting keypoints with SIFT, matching features with BFMatcher, and triangulating matched features to estimate 3D points. The camera's motion in 3D space was determined by solving the Perspective-n-Point (PnP) problem, with RANSAC employed to handle outliers and ensure robust pose estimation. The system's performance was evaluated using the EVO toolbox, which revealed accurate local motion estimation (low Relative Pose Error) but highlighted challenges like cumulative drift over longer trajectories (higher Absolute Pose Error).
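The triangulation step for rectified stereo reduces to the depth-from-disparity relation Z = f·B/d. The sketch below uses invented intrinsics and pixel coordinates, not the actual KITTI calibration.

```python
def triangulate(u_left, u_right, v, fx, fy, cx, cy, baseline):
    """Recover a 3D point from one rectified stereo correspondence.

    u_left/u_right: horizontal pixel coordinates in each image;
    fx, fy, cx, cy: camera intrinsics; baseline: stereo baseline in metres.
    """
    disparity = u_left - u_right
    Z = fx * baseline / disparity          # depth from disparity
    X = (u_left - cx) * Z / fx             # back-project through the pinhole model
    Y = (v - cy) * Z / fy
    return X, Y, Z

# Toy correspondence: 20 px disparity with a 0.54 m baseline
X, Y, Z = triangulate(u_left=700.0, u_right=680.0, v=200.0,
                      fx=720.0, fy=720.0, cx=600.0, cy=180.0,
                      baseline=0.54)
# Z = 720 * 0.54 / 20 = 19.44 m
```

The resulting 3D points from frame k, paired with their 2D reprojections in frame k+1, are what feed the PnP + RANSAC pose solver.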
Deep Learning Lab Work

Deep Learning Lab Work

As part of the Deep Learning course at the University of Zagreb, I worked on several PyTorch implementations covering fundamental and advanced concepts. This included implementing gradient descent, multinomial logistic regression, and fully connected networks (FCNs) on the MNIST dataset. Additionally, I developed convolutional neural networks (CNNs) for image classification on MNIST and CIFAR datasets, recurrent neural networks (RNNs) for sentiment analysis using the Stanford Sentiment Treebank dataset, and metric embeddings on MNIST.
Aerial Robotics Lab Work

Aerial Robotics Lab Work

The lab work in aerial robotics focused on the design and implementation of quadcopter control strategies. This included developing attitude control of a quadcopter, implementing cascade control for a single quadcopter axis in MATLAB, and extending it to horizontal cascade control of a quadrotor in the Gazebo simulator. Additionally, the control algorithms were tested on a real DJI Tello quadrotor to validate their performance in real-world conditions.

Frontier Based Exploration

Developed an autonomous exploration system enabling a Kobuki Turtlebot to navigate and map unknown environments autonomously. The system integrated frontier-based exploration with the RRT-Connect path-planning algorithm for efficient area coverage. A 2D LiDAR sensor provided dynamic mapping, continuously updating the environment map as the robot explored. For safe navigation, the Dynamic Window Approach (DWA) was employed for real-time obstacle avoidance, and the dual-tree RRT-Connect planner enabled collision-free navigation in complex environments.
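The frontier-detection step can be sketched on a toy occupancy grid. The grid values below are illustrative (0 = free, 1 = occupied, -1 = unknown); a frontier cell is simply a free cell adjacent to unknown space, which is where exploration targets are drawn from.

```python
def find_frontiers(grid):
    """Return all free cells with at least one unknown 4-neighbour."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:                    # only free cells qualify
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = [[0, 0, -1],
        [0, 1, -1],
        [0, 0, -1]]
# Cells (0, 1) and (2, 1) border unknown space, so they are frontiers.
```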

EKF based SLAM using ArUco Markers

Developed a Simultaneous Localization and Mapping (SLAM) system for the Kobuki Turtlebot, leveraging a feature-based Extended Kalman Filter (EKF) with ArUco marker range observations. The SLAM system integrated data from wheel encoders, an Inertial Measurement Unit (IMU), and a RealSense camera to enhance localization accuracy. We used dead-reckoning for initial pose estimation, IMU data for orientation correction, and ArUco markers as landmarks for precise mapping and navigation. This resulted in robust, real-time navigation and accurate mapping, significantly improving the robot's position and orientation estimates in dynamic environments.
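The marker-range correction at the core of the filter can be illustrated with a single scalar EKF update. This is a deliberately simplified 1D version with toy numbers: a robot on a line, with estimated position x and variance P, observes the range to a landmark at a known position m.

```python
def range_update(x, P, m, z, R):
    """One EKF measurement update with measurement model h(x) = |m - x|.

    x, P: state estimate and variance; m: known landmark position;
    z: measured range; R: measurement noise variance.
    """
    h = abs(m - x)                      # predicted range
    H = -1.0 if m > x else 1.0          # Jacobian dh/dx
    S = H * P * H + R                   # innovation covariance
    K = P * H / S                       # Kalman gain
    x_new = x + K * (z - h)             # correct the state with the innovation
    P_new = (1 - K * H) * P             # shrink the uncertainty
    return x_new, P_new

x, P = 2.0, 1.0
x, P = range_update(x, P, m=10.0, z=7.5, R=0.25)
# The measured range (7.5) is shorter than predicted (8.0), so the
# estimate moves toward the landmark: x = 2.4, P = 0.2.
```

In the full SLAM system the state additionally contains the landmark positions, so H becomes a sparse Jacobian matrix, but the predict-innovate-correct structure is the same.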

Kinematic Control System for a Mobile Manipulator

This project involved creating a kinematic control system for a mobile manipulator, integrating a uFactory uArm SwiftPro robotic arm with a Kobuki Turtlebot base. The system autonomously navigated and performed precise pick-and-place operations, requiring advanced techniques like solving forward and inverse kinematics to compute joint and base velocities. We implemented Task-Priority and Damped Least Squares (DLS) methods to optimize the robot’s multi-tasking capabilities. Utilizing ROS and the Stonefish simulator, we developed a robust software architecture for navigation, manipulation, and perception, showcasing an advanced approach to integrating mobile platforms with robotic arms for complex tasks.

Real-Time 2D Pose Estimation for Automated Parts-Picking

Developed a real-time 2D pose estimation system for automated parts-picking operations. Using computer vision techniques like camera calibration, Canny edge detection, contour detection, and Principal Component Analysis (PCA), the system accurately determined the position and orientation of objects within the robot's workspace. This information was transformed into the robot's coordinate system, enabling precise and efficient pick-and-place operations. This project demonstrated the practical application of vision-based calibration and object detection in industrial automation, highlighting the potential for enhancing robotic manipulation accuracy and efficiency.
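The PCA orientation step can be shown in isolation: the principal axis of the covariance of an object's contour points gives its in-plane rotation. The point set below is a synthetic elongated blob, not real camera data.

```python
import math

def pca_orientation(points):
    """Angle of the principal axis of a 2D point set, via the 2x2 covariance."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Closed-form eigenvector angle of a symmetric 2x2 matrix
    return 0.5 * math.atan2(2 * sxy, sxx - syy)

# Synthetic contour: points along a line inclined at 30 degrees
theta = math.radians(30)
pts = [(t * math.cos(theta), t * math.sin(theta)) for t in range(-5, 6)]
angle = pca_orientation(pts)           # recovers 30 degrees
```

In the full pipeline this angle, together with the contour centroid, is transformed into the robot's coordinate frame to command the pick pose.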
TurtleBot Path Planning

TurtleBot Online Path Planning

This project implements a real-time path-planning system using the RRT algorithm with path smoothing for a TurtleBot in a 2D environment. The planner dynamically generates collision-free paths while a proportional controller ensures smooth navigation by adjusting linear and angular velocities.
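A minimal RRT loop conveys the planner's structure: sample a point, extend the nearest tree node toward it by a fixed step, and stop once the goal is within reach. The sketch below runs in an obstacle-free square with made-up parameters; the project's planner additionally checks collisions and smooths the resulting path.

```python
import random, math

random.seed(1)
START, GOAL, STEP, GOAL_TOL = (0.0, 0.0), (9.0, 9.0), 1.0, 1.0

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def rrt(max_iters=2000):
    """Grow a tree from START; return the node path once GOAL is reached."""
    nodes, parent = [START], {START: None}
    for _ in range(max_iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        near = min(nodes, key=lambda n: dist(n, sample))   # nearest tree node
        d = dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + STEP * (sample[0] - near[0]) / d,  # extend by STEP
               near[1] + STEP * (sample[1] - near[1]) / d)
        nodes.append(new)
        parent[new] = near
        if dist(new, GOAL) < GOAL_TOL:
            path, n = [], new
            while n is not None:                            # walk back to the root
                path.append(n)
                n = parent[n]
            return path[::-1]
    return None

path = rrt()
```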
Behavior Trees

Autonomous Object Handling with Behavior Trees in TurtleBot

This project focuses on implementing an autonomous object pick-up and drop-off system for a TurtleBot using Behavior Trees and ROS Noetic. The system utilizes PyTrees, an online path planner, and a motion controller to enable the robot to explore an environment, detect objects, pick them up, and place them at designated locations.
Task-Priority Kinematic Control

Task-Priority Kinematic Control

The labs of the Hands-on Intervention course focused on Task-Priority Kinematic Control for mobile manipulators, involving kinematic modeling, the Recursive Task-Priority algorithm, and Weighted Damped Least Squares (DLS). They emphasized multi-task execution, constraint handling, and trajectory accuracy using different kinematic integration methods. These projects enhanced my understanding of inverse kinematics, null-space projection, and motion planning for complex robotic systems.
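The DLS update dq = Jᵀ(JJᵀ + λ²I)⁻¹e can be sketched for a 2-link planar arm. Link lengths, damping, and gain below are chosen arbitrarily for the example; the 2x2 matrix (JJᵀ + λ²I) is inverted in closed form to keep the snippet dependency-free.

```python
import math

L1, L2, DAMP = 1.0, 1.0, 0.1            # illustrative link lengths and damping

def fk(q1, q2):
    """End-effector position of the 2-link planar arm."""
    return (L1 * math.cos(q1) + L2 * math.cos(q1 + q2),
            L1 * math.sin(q1) + L2 * math.sin(q1 + q2))

def dls_step(q1, q2, target, gain=0.5):
    """One Damped Least Squares step toward a Cartesian target."""
    x, y = fk(q1, q2)
    ex, ey = target[0] - x, target[1] - y
    # Analytic Jacobian of the forward kinematics
    j11 = -L1 * math.sin(q1) - L2 * math.sin(q1 + q2)
    j12 = -L2 * math.sin(q1 + q2)
    j21 = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    j22 = L2 * math.cos(q1 + q2)
    # A = J J^T + lambda^2 I, a 2x2 matrix inverted in closed form
    a11 = j11 * j11 + j12 * j12 + DAMP ** 2
    a12 = j11 * j21 + j12 * j22
    a22 = j21 * j21 + j22 * j22 + DAMP ** 2
    det = a11 * a22 - a12 * a12
    wx = (a22 * ex - a12 * ey) / det     # w = A^{-1} e
    wy = (-a12 * ex + a11 * ey) / det
    dq1 = gain * (j11 * wx + j21 * wy)   # dq = J^T w
    dq2 = gain * (j12 * wx + j22 * wy)
    return q1 + dq1, q2 + dq2

q1, q2 = 0.3, 0.3
for _ in range(100):
    q1, q2 = dls_step(q1, q2, target=(1.2, 0.8))
# The end-effector converges to the (reachable) target (1.2, 0.8).
```

The damping term λ²I is what keeps the step bounded near singularities, at the cost of slightly slower convergence, which is the trade-off the DLS labs explore.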
Event-based Cameras

Event-based Cameras

As part of a hands-on perception lab, I worked with event-based cameras (EBCs), focusing on their advantages for high-speed, low-latency vision, motion compensation, and encoding techniques. Using the MVSEC dataset, I analyzed event data to assess image quality through visual inspection and quantitative metrics. A frame-based approach was applied to enhance the visibility of motion patterns, and the impact of camera velocity was explored with a timestamp-based encoding method for improved temporal representation. Additionally, I developed a motion compensation algorithm to enhance event image quality through optical flow prediction. The algorithm's convergence across various time sequences was evaluated, and the Image of Warped Events (IWE) was optimized.
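A timestamp-based encoding can be sketched with a few synthetic events: each pixel stores the normalised timestamp of its most recent event, so more recent motion appears brighter. The event tuples here are toy values, not MVSEC data.

```python
def timestamp_image(events, width, height, t_start, t_end):
    """Encode events (x, y, t, polarity) as a per-pixel latest-timestamp image."""
    img = [[0.0] * width for _ in range(height)]
    for x, y, t, _polarity in events:
        # Keep the most recent (largest) normalised timestamp per pixel
        img[y][x] = max(img[y][x], (t - t_start) / (t_end - t_start))
    return img

# Two events at pixel (1, 1) and one at (3, 2), timestamps in [0, 1]
events = [(1, 1, 0.2, 1), (1, 1, 0.8, -1), (3, 2, 0.5, 1)]
img = timestamp_image(events, width=5, height=4, t_start=0.0, t_end=1.0)
# Pixel (1, 1) keeps intensity 0.8 (its latest event); untouched pixels stay 0.
```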
ArUco Marker Applications

Computer-vision applications using ArUco marker

As part of a hands-on perception lab, I implemented ArUco marker-based computer vision applications using OpenCV. This included generating and detecting markers, calibrating a camera using an ArUco board, and performing real-time pose estimation for augmented reality. By utilizing OpenCV’s marker detection, camera calibration, and solvePnP functions, I developed a system that accurately tracks markers and overlays virtual objects in a real-world environment.
EKF Localization

Feature-Based Extended Kalman Filter (FEKF) for Localization

This project implements Feature-Based Extended Kalman Filter (FEKF) Localization for a 3-DoF Differential Drive Mobile Robot using ROS and Python. The system integrates odometry and compass readings with a map-based EKF approach, where known environmental features aid localization. The implementation includes dead reckoning localization, data association techniques, and state estimation using Gaussian filters. Two motion models—Input Displacement and Constant Velocity—were explored to evaluate EKF performance.
UR3e Palletizing

Palletizing operation with UR3e Collaborative Robot

As part of a robot manipulation course, I worked with the UR3e collaborative robot to perform a depalletizing task using the PolyScope programming console. The project involved configuring the robotic arm, programming its movements, and implementing force control for accurate part stacking. Key tasks included setting robot positions in freedrive mode, detecting part presence using a vacuum-based end-effector, and integrating light indicators for process monitoring.
Staubli TS-60

Classification and Pick-and-Place with the Staubli TS-60

As part of a robotics lab, I programmed the Staubli TS-60 robot to perform a pick-and-place task with automated classification of metal and non-metal cylinders. Using sensors and actuators, I implemented a classification station to detect materials and sort them accordingly. The robot, equipped with a pneumatic suction-based end-effector, was programmed using SRS software and VAL3 scripting to execute precise movements for object manipulation.
Staubli TX-60

Assembly and Pick-and-Place with the Staubli TX-60

In a robotics lab, I programmed the Staubli TX-60 robotic arm to perform an automated cylinder-piston assembly and pick-and-place task. Using SRS software and VAL3 scripting, presence sensors were integrated for object detection, a pneumatic gripper was controlled for precise handling, and adaptive movements were implemented for efficient assembly. The robot dynamically identified piston availability and adjusted its operations accordingly, enhancing reliability in an industrial setting.
Reinforcement Learning Path Planning

Reinforcement Learning-Based Path Planning

This project focused on implementing a Reinforcement Learning (RL) approach for path planning in autonomous robots operating within a static environment. The Q-learning algorithm was applied to a point (omnidirectional) robot to enable efficient navigation and obstacle avoidance. Through a trial-and-error process, the robot learned to identify optimal paths, gradually improving its ability to reach target locations while avoiding obstacles.
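The Q-learning formulation can be sketched on a toy grid world. The grid layout, rewards, and hyperparameters below are invented for the example: a point robot learns, by trial and error, a policy that reaches the goal cell while avoiding an obstacle cell.

```python
import random

random.seed(0)
ROWS, COLS = 4, 4
GOAL, OBSTACLE = (3, 3), (1, 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]       # up, down, left, right
Q = {(r, c): [0.0] * 4 for r in range(ROWS) for c in range(COLS)}

def step(state, a):
    """Apply an action; walls and the obstacle keep the robot in place."""
    r = max(0, min(ROWS - 1, state[0] + ACTIONS[a][0]))
    c = max(0, min(COLS - 1, state[1] + ACTIONS[a][1]))
    if (r, c) == OBSTACLE:
        return state, -10.0, False                 # penalised, stays put
    if (r, c) == GOAL:
        return (r, c), 10.0, True
    return (r, c), -1.0, False                     # step cost favours short paths

alpha, gamma, eps = 0.5, 0.9, 0.2                  # learning rate, discount, exploration
for _ in range(500):                               # training episodes
    s = (0, 0)
    for _ in range(200):                           # step cap per episode
        a = random.randrange(4) if random.random() < eps else Q[s].index(max(Q[s]))
        s2, reward, done = step(s, a)
        # Q-learning update: bootstrap on the best next-state value
        Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

# Greedy rollout of the learned policy from the start cell
s, path = (0, 0), [(0, 0)]
for _ in range(30):
    s, _, done = step(s, Q[s].index(max(Q[s])))
    path.append(s)
    if done:
        break
```

The negative step reward is what makes the greedy policy prefer short paths, mirroring how the project's robot gradually learned efficient, obstacle-avoiding routes.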
Smart Spraying System

Deep Learning Based Smart Spraying System for Disease Control of Vegetables (Bachelor's Final Year Project)

For my Bachelor's final year project, I developed a smart spraying system that uses deep learning and embedded systems to detect and treat plant diseases in vegetable crops. By integrating a YOLOv5-based detection model with a Raspberry Pi, an Arduino, and solenoid valves, the system identifies diseased plants and applies pesticides precisely, reducing chemical waste and environmental impact. The autonomous mobile platform, powered by DC motors and solar energy, enhances efficiency in agriculture.