
LUTFormer: Lookup table transformer for image enhancement

  • Jinwon Ko
  • Keunsoo Ko
  • Hanul Kim
  • Chang Su Kim

Research output: Contribution to journal › Article › peer-review

Abstract

Existing image enhancement methods based on 3D lookup tables (LUTs) often yield suboptimal results by oversimplifying image context into a single global feature and disrupting the inherent geometric structure of a LUT during regression. To address these issues, we propose LUTFormer, a novel framework that reframes LUT prediction as a query-based refinement task. LUTFormer preserves geometric integrity by initializing LUT grid points as structured query tokens, which are then progressively refined by a transformer decoder. This decoder leverages a novel progressive cross-attention mechanism to inject multi-level image context, yielding a context-aware LUT transformation. Extensive experiments on multiple benchmark datasets confirm the effectiveness and efficiency of the proposed LUTFormer. The source code is available at https://github.com/Jinwon-Ko/LUTFormer.
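To make the abstract's architecture concrete, below is a minimal, hypothetical PyTorch sketch of the core idea: the grid points of a 3D LUT are initialized as structured query tokens (here, from the identity LUT, preserving the grid's geometric structure) and refined by a transformer decoder that cross-attends to image features. All names, dimensions, the residual output head, and the single decoder layer are illustrative assumptions, not the authors' implementation; the paper's progressive cross-attention would instead use multiple layers attending to multi-level features (see the linked repository for the actual code).

```python
# Hypothetical sketch: 3D LUT grid points as query tokens, refined by
# cross-attention over image context. Not the authors' implementation.
import torch
import torch.nn as nn


class LUTQueryDecoder(nn.Module):
    def __init__(self, grid_size=17, dim=128, num_heads=4):
        super().__init__()
        self.grid_size = grid_size
        # Initialize queries from the identity LUT so the geometric
        # structure of the grid is preserved at the start.
        axis = torch.linspace(0.0, 1.0, grid_size)
        r, g, b = torch.meshgrid(axis, axis, axis, indexing="ij")
        identity = torch.stack([r, g, b], dim=-1).reshape(-1, 3)  # (S^3, 3)
        self.register_buffer("identity_lut", identity)
        self.query_embed = nn.Linear(3, dim)   # lift grid points to tokens
        self.decoder_layer = nn.TransformerDecoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )
        self.head = nn.Linear(dim, 3)          # residual RGB offsets

    def forward(self, img_feats):
        # img_feats: (B, N, dim) flattened image context tokens.
        B = img_feats.size(0)
        queries = self.query_embed(self.identity_lut)
        queries = queries.unsqueeze(0).expand(B, -1, -1)
        # Cross-attend the LUT queries to the image context.
        refined = self.decoder_layer(queries, img_feats)
        # Predict offsets around the identity mapping -> context-aware LUT.
        lut = self.identity_lut + self.head(refined)  # (B, S^3, 3)
        S = self.grid_size
        return lut.view(B, S, S, S, 3)
```

The resulting (S, S, S, 3) grid could then be applied to an input image by trilinear interpolation, for example with torch.nn.functional.grid_sample over a 5D volume, which is the standard way differentiable 3D LUTs are evaluated.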

Original language: English
Article number: 131863
Journal: Neurocomputing
Volume: 660
State: Published - 7 Jan 2026

Keywords

  • Context-aware color transformation
  • Image enhancement
  • Lookup table
  • Vision transformer
