leonardlin's Collections: reasoning

• Can Large Language Models Understand Context? (arXiv:2402.00858, 24 upvotes)
• Efficient Tool Use with Chain-of-Abstraction Reasoning (arXiv:2401.17464, 21 upvotes)
• ReFT: Reasoning with Reinforced Fine-Tuning (arXiv:2401.08967, 31 upvotes)
• The Impact of Reasoning Step Length on Large Language Models (arXiv:2401.04925, 18 upvotes)
• Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding (arXiv:2401.04398, 25 upvotes)
• Self-Discover: Large Language Models Self-Compose Reasoning Structures (arXiv:2402.03620, 117 upvotes)
• Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs (arXiv:2311.04892, 1 upvote)
• More Agents Is All You Need (arXiv:2402.05120, 57 upvotes)
• Grandmaster-Level Chess Without Search (arXiv:2402.04494, 69 upvotes)
• The Benefits of a Concise Chain of Thought on Problem-Solving in Large Language Models (arXiv:2401.05618, 1 upvote)
• Divide-or-Conquer? Which Part Should You Distill Your LLM? (arXiv:2402.15000, 23 upvotes)
• System 2 Attention (is something you might need too) (arXiv:2311.11829, 43 upvotes)
• Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking (arXiv:2403.09629, 79 upvotes)
• On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial (arXiv:2403.14380, 1 upvote)
• Orca-Math: Unlocking the potential of SLMs in Grade School Math (arXiv:2402.14830, 24 upvotes)
• Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models (arXiv:2404.02575, 50 upvotes)
• Compression Represents Intelligence Linearly (arXiv:2404.09937, 28 upvotes)
• Democratizing Reasoning Ability: Tailored Learning from Large Language Model (arXiv:2310.13332, 16 upvotes)
• DeepSeek-Prover: Advancing Theorem Proving in LLMs through Large-Scale Synthetic Data (arXiv:2405.14333, 44 upvotes)
• Large Language Models as Planning Domain Generators (arXiv:2405.06650, 13 upvotes)
• ALPINE: Unveiling the Planning Capability of Autoregressive Learning in Language Models (arXiv:2405.09220, 27 upvotes)
• On the Brittle Foundations of ReAct Prompting for Agentic Large Language Models (arXiv:2405.13966, 2 upvotes)
• Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization (arXiv:2405.15071, 42 upvotes)
• Accessing GPT-4 level Mathematical Olympiad Solutions via Monte Carlo Tree Self-refine with LLaMa-3 8B (arXiv:2406.07394, 29 upvotes)
• Mixture-of-Agents Enhances Large Language Model Capabilities (arXiv:2406.04692, 59 upvotes)
• Your Context Is Not an Array: Unveiling Random Access Limitations in Transformers (arXiv:2408.05506, 9 upvotes)
• Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers (arXiv:2408.06195, 73 upvotes)