# Eligibility Trace

An eligibility trace is a short-term memory mechanism that tracks which states or actions were recently visited within an episode. It helps address:

1. The credit assignment problem, by allowing delayed rewards to update earlier states/actions that contributed to the outcome.
2. The need for memory of past states in non-Markov tasks.

Eligibility traces are used in algorithms like TD(λ), SARSA(λ), and actor-critic methods with traces. They can be added to most [[Temporal Difference Learning]] algorithms to bridge the gap between Monte Carlo and one-step TD methods (see the sketch below).

Examples of non-Markov tasks include partially observable environments (where the current observation doesn't reveal the full state), games requiring memory of opponent patterns, or navigation tasks where knowing your path history matters for optimal decisions.
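As a rough illustration of how a trace couples the TD error to recently visited states, here is a minimal tabular TD(λ) sketch with accumulating traces. The environment interface (`env.reset()`, `env.step()` returning `(next_state, reward, done, info)`), the random placeholder policy, and integer state indexing are assumptions made for the example, not part of this note.

```python
import numpy as np

def td_lambda_episode(env, V, alpha=0.1, gamma=0.99, lam=0.9):
    """Run one episode of tabular TD(lambda) with accumulating traces.

    Assumes a Gym-style env whose states are integer indices into the
    value table V; the random policy below is a placeholder.
    """
    z = np.zeros_like(V)            # eligibility trace, one entry per state
    state = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()              # placeholder policy
        next_state, reward, done, _ = env.step(action)

        # TD error for this transition (bootstrap only if not terminal)
        delta = reward + gamma * V[next_state] * (not done) - V[state]

        z[state] += 1.0              # accumulating trace: mark current state
        V += alpha * delta * z       # credit every recently visited state
        z *= gamma * lam             # decay eligibility of older states

        state = next_state
    return V
```

Setting λ = 0 collapses the trace to the current state only (one-step TD), while λ = 1 spreads credit over the whole episode, behaving like a Monte Carlo update; intermediate values interpolate between the two. A common variant replaces `z[state] += 1.0` with `z[state] = 1.0` (replacing traces).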