I already have a working pipeline that spots ripe blackberries with a custom-trained YOLOv5 model (weights are ready) and feeds their pixel coordinates to my S0101 robotic arm. What I'm missing is the last, crucial step: turning those detections into a reliable 3-D trajectory so the gripper can move in, grasp, and pick without bumping the plant or losing berries. Everything runs on a custom-built control system driven from Python, so the task is to slot a path-planning layer into code that is largely finished. I can provide the camera-to-world calibration, the arm kinematics, and the current scripts that publish joint targets; today they follow a straight-line approach that fails whenever branches get in the way.

What I need from you
• A Python module (or set of functions) that takes XYZ targets from my vision script and returns collision-free joint trajectories for the S0101 arm.
• Clean integration with the existing classes and message format I'll share, so I can call a single method, e.g., plan_and_execute(target_pose); see the sketch at the end of this post.
• A short demonstration video or simulation capture showing the arm picking at least three berries in sequence, plus the source code and any config files.

Acceptance criteria
1. The path executes end-to-end on my rig with no branch collisions at nominal picking speed.
2. Each berry is grasped within ±5 mm of the visual target.
3. All code is commented and stays inside the current Python environment (no extra languages).

If you've integrated planners like RRT*, OMPL, or MoveIt-style algorithms into custom controllers before, this should feel familiar, though I'm open to simpler heuristics if they hit the marks above. I'll be available to test iterations quickly, so we can converge fast.
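To make the integration point concrete, here is a minimal sketch of the shape I have in mind: a goal-biased RRT in joint space, with collision checking injected as a plain Python callable so the plant/branch model stays in my existing codebase. Everything in it is illustrative: RRTPlanner, ik_fn, branch_collides, arm.joint_limits, and arm.publish_joint_target are placeholder names, not my actual API (I'll share the real classes), and the final planner design is up to you.

    import numpy as np

    class RRTPlanner:
        """Goal-biased RRT over joint configurations. The collision
        check is a user-supplied predicate: q -> True if q collides."""

        def __init__(self, joint_limits, collision_fn, step=0.05,
                     max_iters=5000, goal_bias=0.1, seed=None):
            self.limits = np.asarray(joint_limits, dtype=float)  # (n, 2) lo/hi per joint
            self.collision_fn = collision_fn
            self.step = step                # max joint-space extension per node (rad)
            self.max_iters = max_iters
            self.goal_bias = goal_bias      # fraction of samples drawn at the goal
            self.rng = np.random.default_rng(seed)

        def plan(self, q_start, q_goal):
            """Return a list of joint configurations from q_start to
            q_goal, or None if no collision-free path was found."""
            q_goal = np.asarray(q_goal, dtype=float)
            nodes = [np.asarray(q_start, dtype=float)]
            parents = [-1]
            for _ in range(self.max_iters):
                # Sample: mostly uniform over joint limits, sometimes the goal.
                if self.rng.random() < self.goal_bias:
                    q_rand = q_goal
                else:
                    q_rand = self.rng.uniform(self.limits[:, 0], self.limits[:, 1])
                # Extend the nearest tree node one step toward the sample.
                i_near = int(np.argmin([np.linalg.norm(n - q_rand) for n in nodes]))
                delta = q_rand - nodes[i_near]
                dist = np.linalg.norm(delta)
                if dist < 1e-9:
                    continue
                q_new = nodes[i_near] + delta * min(self.step, dist) / dist
                # With a small step, checking the endpoint approximates
                # checking the whole edge; a production version should
                # densify and check intermediate points too.
                if self.collision_fn(q_new):
                    continue
                nodes.append(q_new)
                parents.append(i_near)
                # Within one step of the goal: walk the tree back to the root.
                if (np.linalg.norm(q_new - q_goal) < self.step
                        and not self.collision_fn(q_goal)):
                    path, i = [q_goal], len(nodes) - 1
                    while i != -1:
                        path.append(nodes[i])
                        i = parents[i]
                    return path[::-1]
            return None

    def plan_and_execute(target_xyz, ik_fn, branch_collides, arm, q_current):
        # ik_fn: XYZ target -> joint angles (from my existing kinematics).
        # branch_collides: joint config -> bool (against the plant model).
        # arm.joint_limits / arm.publish_joint_target: placeholders for
        # my current control classes and message format.
        q_goal = ik_fn(target_xyz)
        planner = RRTPlanner(arm.joint_limits, branch_collides)
        path = planner.plan(q_current, q_goal)
        if path is None:
            raise RuntimeError("no collision-free path found for this berry")
        for q in path:
            arm.publish_joint_target(q)  # existing joint-target message

One note on the shape: planning directly in joint space means the output is immediately executable as joint targets, which matches how my current scripts publish commands; if you prefer Cartesian planning plus IK per waypoint, or an off-the-shelf planner behind the same plan_and_execute signature, that works for me too.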