Model-free and model-based methods are the two broad categories of reinforcement learning (RL) algorithms.
Model-free methods do not require the agent to have any explicit knowledge of the environment's transition dynamics (i.e., the probability of moving from one state to another) or reward function. Instead, the agent learns a value function or policy directly from experience, observing the rewards and next states that result from its actions. Common examples include Q-learning, SARSA, and Monte Carlo methods. These methods are often simpler to implement, but because they never build a model they typically need more interaction data to converge.
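As a concrete illustration, here is a minimal tabular Q-learning sketch on a small, hypothetical chain MDP (the environment, its size, and all hyperparameters below are illustrative assumptions, not taken from any particular library). The key point is that the agent only updates Q-values from observed transitions and never builds or consults a model of the dynamics.

```python
import random

# Toy 1-D chain of 5 states; moving right from the second-to-last state
# yields reward 1 and ends the episode. This environment is an assumption
# made up for illustration.
N_STATES = 5
ACTIONS = [0, 1]          # 0 = left, 1 = right
ALPHA = 0.1               # learning rate
GAMMA = 0.9               # discount factor
EPSILON = 0.1             # exploration rate

def step(state, action):
    """Environment dynamics, unknown to the agent: the agent only sees
    the resulting next state and reward, never these rules."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if (state == N_STATES - 2 and action == 1) else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Q-learning update: purely from the observed (s, a, r, s') sample.
        target = reward + GAMMA * max(Q[next_state])
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = next_state

print("Learned Q-values:", Q)
```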
Model-based methods, on the other hand, make use of a model of the environment, either given in advance or learned from experience, that predicts state transitions and rewards. The agent can plan with this model by simulating future states and actions before acting in the real environment. Examples include Dynamic Programming (which assumes the model is known) and Monte Carlo Tree Search. Because the model can stand in for real interaction, model-based approaches are often more sample-efficient: they leverage predicted transitions and rewards to improve planning without additional environment steps.
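A minimal sketch of the model-based idea, reusing the same hypothetical chain MDP as above: the agent first estimates transition probabilities and rewards from a batch of experience, then plans with value iteration on that learned model without touching the environment again. This is a simple certainty-equivalence illustration under assumed settings, not a definitive implementation.

```python
import random
from collections import defaultdict

N_STATES = 5
ACTIONS = [0, 1]          # 0 = left, 1 = right
GAMMA = 0.9

def step(state, action):
    # Same illustrative chain dynamics as in the model-free sketch.
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if (state == N_STATES - 2 and action == 1) else 0.0
    return next_state, reward

# 1) Collect experience and build an empirical model of the environment.
counts = defaultdict(lambda: defaultdict(int))   # (s, a) -> {s': count}
reward_sum = defaultdict(float)                  # (s, a) -> total reward
visits = defaultdict(int)                        # (s, a) -> visit count

for _ in range(5000):
    s = random.randrange(N_STATES)
    a = random.choice(ACTIONS)
    s2, r = step(s, a)
    counts[(s, a)][s2] += 1
    reward_sum[(s, a)] += r
    visits[(s, a)] += 1

def p_hat(s, a, s2):
    # Estimated transition probability from counts.
    return counts[(s, a)][s2] / visits[(s, a)] if visits[(s, a)] else 0.0

def r_hat(s, a):
    # Estimated expected reward.
    return reward_sum[(s, a)] / visits[(s, a)] if visits[(s, a)] else 0.0

# 2) Plan: value iteration on the learned model, with no further
#    environment interaction.
V = [0.0] * N_STATES
for _ in range(100):
    V = [max(r_hat(s, a) + GAMMA * sum(p_hat(s, a, s2) * V[s2]
                                       for s2 in range(N_STATES))
             for a in ACTIONS)
         for s in range(N_STATES)]

print("Planned state values:", [round(v, 3) for v in V])
```

The contrast with the previous sketch is the point: here the environment samples are spent on fitting a model, and all subsequent improvement comes from planning against that model, which is why model-based methods can get more value out of each real transition.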