In this symposium we look back at some of the results of the REINS project on responsible intelligent systems and look ahead to new routes of investigation for responsible AI in general. The starting point of the REINS project was to develop logic-based modelling techniques that enable responsibility checking of artificial agents. As the project progressed, that goal evolved into a more general objective: to understand the concepts involved in the modelling of responsibility, and to find the formal models that characterize them. Formalization has been the main focus throughout the project, paving the way for a more precise and operationalizable understanding of the concepts involved in responsibility, machine ethics and deontic reasoning. We believe the symbolic methods we use in our model-driven approach are essential to achieving responsible AI, since responsibility is too abstract and precarious a notion to be learned through data-driven approaches alone. However, the interplay between model-driven and data-driven approaches is one of the new directions we are interested in pursuing. For this final symposium of the project, we invite philosophers, logicians and specialists from concrete application areas of responsible intelligent systems (e.g., the military and the police).