The Effect of Chunk Retrieval Order in RAG on the Multi-Step Inference Performance of Large Language Models