Over the last 25 years, optical flow has become increasingly important in the field of Computer Vision. Many algorithms have been published since the first paper in 1981, and numerous concepts have been introduced to address the problems that arise when pixel displacements between frames are estimated. Even though today's algorithms explicitly account for violations of their model assumptions, they still suffer from two main issues: the imprecise estimation of motion discontinuities, which often causes flow algorithms to oversmooth object boundaries, and the inability to find correct displacements if an object in the scene is weakly or repetitively textured. We show that the computation of optical flow can be substantially improved if additional geometric information about the respective frames is incorporated: the surface normals of the objects in the scene and the depth discontinuities. The former provide a larger feature vector and hence more unique displacements, whereas the latter allow the estimation of a piecewise smooth flow field that preserves discontinuities at object boundaries. Neither input requires a 3D representation of the scene; both information channels are obtained by illuminating the scene under different lighting conditions. This thesis explains how the surface normals and the depth discontinuities of a scene can be obtained using specific lighting conditions and how they are incorporated into an existing optical flow framework. We focus on human performances in the context of Image-Based Relighting.
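To make the two roles of the additional channels concrete, the following is a minimal Python sketch, not the thesis implementation: the function and parameter names (data_cost, smoothness_weights, lam_n, alpha) and the chosen weights are assumptions for illustration only. It shows how surface normals could extend the per-pixel matching cost and how a depth-discontinuity map could down-weight the smoothness penalty so that the flow field is allowed to break at object boundaries.

import numpy as np

def data_cost(I1, I2, N1, N2, u, v, lam_n=0.5):
    # Per-pixel matching cost: brightness constancy plus a
    # normal-constancy term weighted by the assumed parameter lam_n.
    # I1, I2: grayscale frames (H x W); N1, N2: normal maps (H x W x 3);
    # u, v: integer flow components (H x W).
    H, W = I1.shape
    ys, xs = np.mgrid[0:H, 0:W]
    xw = np.clip(xs + u, 0, W - 1)   # warped x-coordinates
    yw = np.clip(ys + v, 0, H - 1)   # warped y-coordinates
    cost_i = np.abs(I1 - I2[yw, xw])                  # intensity difference
    cost_n = np.linalg.norm(N1 - N2[yw, xw], axis=2)  # normal difference
    return cost_i + lam_n * cost_n

def smoothness_weights(D, alpha=10.0):
    # Per-pixel weight for the smoothness term. D is an edge map in
    # [0, 1] that equals 1 at depth discontinuities; alpha is an assumed
    # falloff parameter. Weights near 0 permit flow discontinuities.
    return np.exp(-alpha * D)

In a variational setting, the weights returned by smoothness_weights would multiply the regularizer, e.g. w * (|grad u|^2 + |grad v|^2), so that smoothing is suppressed exactly where the depth-discontinuity map indicates an object boundary.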