Low Complexity Regularization of Inverse Problems

Gabriel Peyré

Joint works with: Samuel Vaiter, Jalal Fadili, Charles Dossal, Mohammad Golbabaee

www.numerical-tours.com
Overview

• Compressed Sensing and Inverse Problems

• Convex Regularization with Gauges

• Performance Guarantees
Single Pixel Camera (Rice)

The scene x̃₀ is focused on an array of N micro-mirrors; each of the P measurements is a scalar product
    y[i] = ⟨x₀, φᵢ⟩.
Reconstructions from P/N = 1, P/N = 0.16 and P/N = 0.02 measurements.
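To make the acquisition model concrete, here is a minimal numerical sketch (not the actual Rice hardware): it simulates y[i] = ⟨x₀, φᵢ⟩ with random binary mirror patterns φᵢ. The pattern distribution, the resolution N and the measurement ratio are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64 * 64          # micro-mirror array resolution
P = int(0.16 * N)    # number of measurements, P/N = 0.16

x0 = rng.random(N)                      # stand-in for the digitized scene
Phi = rng.integers(0, 2, size=(P, N))   # 0/1 mirror patterns, one row per measurement
y = Phi @ x0                            # y[i] = <x0, phi_i>
print(y.shape)                          # (655,)
```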
CS Hardware Model

CS is about designing hardware: input signals f̃ ∈ L²(ℝ²).
Physical hardware resolution limit: target resolution x₀ ∈ ℝ^N, the digitization of f̃ at the micro-mirror array resolution N.

Acquisition pipeline: f̃ ∈ L²(ℝ²) → micro-mirror array (resolution N) → x₀ ∈ ℝ^N → CS hardware → y ∈ ℝ^P.

Operator Φ: the measurement vectors (φᵢ)ᵢ stacked as rows, so that y = Φx₀.
Inverse Problems

Recovering x₀ ∈ ℝ^N from noisy observations
    y = Φx₀ + w ∈ ℝ^P.

Examples: inpainting, super-resolution, compressed sensing.
Inverse Problem Regularization

Observations: y = Φx₀ + w ∈ ℝ^P.
Estimator: x(y) depends only on the observations y and a parameter λ.

Example: variational methods
    x(y) ∈ argmin_{x ∈ ℝ^N}  ½ ||y − Φx||² + λ J(x)        (P_λ(y))
where ½ ||y − Φx||² is the data fidelity and J(x) the regularity term.

Performance analysis: criteria on (x₀, ||w||, λ) to ensure
– L² stability: ||x(y) − x₀|| = O(||w||);
– model stability (e.g. spikes location).
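As an illustration of the variational estimator, the following sketch solves P_λ(y) for J = ||·||₁ by iterative soft-thresholding (ISTA, a standard proximal-gradient scheme not discussed in the slides); the problem sizes and the value of λ are arbitrary choices.

```python
import numpy as np

def ista(Phi, y, lam, n_iter=500):
    """Solve min_x 0.5 ||y - Phi x||^2 + lam ||x||_1 by proximal gradient."""
    L = np.linalg.norm(Phi, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = x - Phi.T @ (Phi @ x - y) / L           # gradient step on the data fidelity
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return x

# Small demo: recover a sparse x0 from noisy Gaussian measurements.
rng = np.random.default_rng(0)
P, N, s = 40, 100, 5
Phi = rng.standard_normal((P, N))
x0 = np.zeros(N)
x0[rng.choice(N, s, replace=False)] = 1.0
w = 0.01 * rng.standard_normal(P)
x_hat = ista(Phi, Phi @ x0 + w, lam=0.05)
print(np.linalg.norm(x_hat - x0))                   # small, of order ||w||
```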
Overview

• Compressed Sensing and Inverse Problems

• Convex Regularization with Gauges

• Performance Guarantees
Union of Linear Models for Data Processing

Union of models: linear spaces T ∈ 𝒯.
– Synthesis sparsity: the image is synthesized from sparse coefficients x.
– Structured sparsity: the support of x is organized in blocks.
– Analysis sparsity: the image x has a sparse gradient D^*x.
– Low-rank: the image x, seen as a matrix, has few nonzero singular values.
– Multi-spectral imaging: x_{i,·} = Σ_{j=1}^r A_{i,j} S_{j,·}, a mixture of r source spectra S_{1,·}, S_{2,·}, S_{3,·}, …
Gauges for Union of Linear Models

Gauge: J : ℝ^N → ℝ⁺ convex, with ∀ α ∈ ℝ⁺, J(αx) = αJ(x).

Piecewise regular ball ⟺ union of linear models (T)_{T ∈ 𝒯}:
– J(x) = ||x||₁:  T = sparse vectors;
– J(x) = |x₁| + ||x_{2,3}||:  T = block sparse vectors;
– J(x) = ||x||_*:  T = low-rank matrices;
– J(x) = ||x||_∞:  T = antisparse vectors.
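A short sketch evaluating these four gauges numerically and checking positive homogeneity; the block partition B passed to the block-sparse gauge is an assumption of the example.

```python
import numpy as np

def l1(x):        return np.abs(x).sum()                            # T = sparse vectors
def block(x, B):  return sum(np.linalg.norm(x[list(b)]) for b in B) # T = block sparse
def nuclear(X):   return np.linalg.svd(X, compute_uv=False).sum()   # T = low-rank
def linf(x):      return np.abs(x).max()                            # T = antisparse

x = np.array([1.0, -2.0, 0.5])
B = [(0,), (1, 2)]                          # assumed block partition {x1}, {x2, x3}
for J in (l1, lambda v: block(v, B), linf):
    assert np.isclose(J(2 * x), 2 * J(x))   # positive homogeneity J(ax) = a J(x)
print(l1(x), block(x, B), linf(x), nuclear(np.outer(x, x)))
```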
Subdifferentials and Models

∂J(x) = {η : ∀ y, J(y) ≥ J(x) + ⟨η, y − x⟩}.

Example: J(x) = ||x||₁ with I = supp(x) = {i : xᵢ ≠ 0}:
    ∂||x||₁ = {η : η_I = sign(x_I) and ∀ j ∉ I, |η_j| ≤ 1},
    T_x = {η : supp(η) ⊂ I},   e_x = sign(x).

Definition: T_x = VectHull(∂J(x))^⊥, and e_x is the common projection of the subdifferential on T_x:
    η ∈ ∂J(x)  ⟹  Proj_{T_x}(η) = e_x.
Examples

– ℓ¹ sparsity: J(x) = ||x||₁;  e_x = sign(x);  T_x = {z : supp(z) ⊂ supp(x)}.
– Structured sparsity: J(x) = Σ_b ||x_b||; with N(a) = a/||a||,  e_x = (N(x_b))_{b ∈ B};  T_x = {z : supp(z) ⊂ supp(x)}.
– Nuclear norm: J(x) = ||x||_* with SVD x = UΛV^*;  e_x = UV^*;  T_x = {z : U_⊥^* z V_⊥ = 0}.
– Anti-sparsity: J(x) = ||x||_∞ with I = {i : |xᵢ| = ||x||_∞};  e_x = |I|⁻¹ sign(x);  T_x = {y : y_I ∝ sign(x_I)}.
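The pairs (e_x, T_x) above can be computed explicitly. This sketch does so for the ℓ¹ and nuclear-norm models, returning e_x together with the orthogonal projector onto T_x; the helper names model_l1 and model_nuclear are hypothetical, not from the slides.

```python
import numpy as np

def model_l1(x, tol=1e-10):
    """e_x and Proj_{T_x} for J = ||.||_1."""
    I = np.abs(x) > tol
    e = np.where(I, np.sign(x), 0.0)            # e_x = sign(x)
    return e, (lambda z: np.where(I, z, 0.0))   # T_x = {z : supp(z) in supp(x)}

def model_nuclear(X, tol=1e-10):
    """e_x and Proj_{T_x} for J = ||.||_*."""
    U, S, Vt = np.linalg.svd(X)
    r = int((S > tol).sum())
    U, V = U[:, :r], Vt[:r].T
    e = U @ V.T                                 # e_x = U V^*
    Pu = np.eye(U.shape[0]) - U @ U.T           # projector on U_perp
    Pv = np.eye(V.shape[0]) - V @ V.T           # projector on V_perp
    return e, (lambda Z: Z - Pu @ Z @ Pv)       # T_x = {Z : U_perp^* Z V_perp = 0}

x = np.array([2.0, 0.0, -1.0])
e, proj = model_l1(x)
print(e, proj(np.ones(3)))                      # e_x and a projection onto T_x
```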
Overview

• Compressed Sensing and Inverse Problems

• Convex Regularization with Gauges

• Performance Guarantees
Dual Certificate and L² Stability

Noiseless recovery:  min_{Φx = Φx₀} J(x).        (P₀)

Dual certificates:  D(x₀) = Im(Φ^*) ∩ ∂J(x₀).
Proposition: x₀ is a solution of (P₀) ⟺ ∃ η ∈ D(x₀).

Tight dual certificates:  D̄(x₀) = Im(Φ^*) ∩ ri(∂J(x₀)).
Theorem [Fadili et al. 2013]: if ∃ η ∈ D̄(x₀), then for λ ∼ ||w|| one has ||x⋆ − x₀|| = O(||w||).

[Grasmair, Haltmeier, Scherzer 2010]: J = || · ||₁.
[Grasmair 2012]: J(x⋆ − x₀) = O(||w||).
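To see the proposition at work, the following sketch solves the noiseless problem (P₀) for J = ||·||₁ by recasting it as a linear program (the split x = u − v with u, v ≥ 0 is a standard device, not a construction from the slides), and checks exact recovery of a sparse x₀ from Gaussian measurements.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
P, N, s = 60, 120, 8
Phi = rng.standard_normal((P, N))
x0 = np.zeros(N)
x0[rng.choice(N, s, replace=False)] = rng.standard_normal(s)

c = np.ones(2 * N)                      # minimize sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([Phi, -Phi])           # constraint Phi (u - v) = Phi x0
res = linprog(c, A_eq=A_eq, b_eq=Phi @ x0, bounds=(0, None))
x_star = res.x[:N] - res.x[N:]
print(np.linalg.norm(x_star - x0))      # ~0 when a dual certificate exists
```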
Compressed Sensing Setting

Random matrix: Φ ∈ ℝ^{P×N}, Φ_{i,j} ∼ 𝒩(0,1), i.i.d.

Sparse vectors, J = || · ||₁:        [Rudelson, Vershynin 2006] [Chandrasekaran et al. 2011]
Theorem: let s = ||x₀||₀. If P ≥ 2s log(N/s), then ∃ η ∈ D̄(x₀) with high probability on Φ.

Low-rank matrices, J = || · ||_*, x₀ ∈ ℝ^{N₁×N₂}:        [Chandrasekaran et al. 2011]
Theorem: let r = rank(x₀). If P ≥ 3r(N₁ + N₂ − r), then ∃ η ∈ D̄(x₀) with high probability on Φ.

→ Similar results for || · ||_{1,2}, || · ||_∞.
Minimal-norm Certificate

Write T = T_{x₀} and e = e_{x₀}. Then
    η ∈ D(x₀)  ⟹  η = Φ^*q with Proj_T(η) = e.

Minimal-norm pre-certificate:  η₀ = argmin { ||q|| : η = Φ^*q, η_T = e }.
Proposition: one has η₀ = Φ^*(Φ_T^+)^* e.

Theorem [Vaiter et al. 2013]: if η₀ ∈ D̄(x₀) and λ ∼ ||w||, the unique solution x⋆ of P_λ(y) for y = Φx₀ + w satisfies
    T_{x⋆} = T_{x₀}   and   ||x⋆ − x₀|| = O(||w||).

[Fuchs 2004]: J = || · ||₁.
[Vaiter et al. 2011]: J = ||D^* · ||₁.
[Bach 2008]: J = || · ||_{1,2} and J = || · ||_*.
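For J = ||·||₁ the pre-certificate specializes to Fuchs' construction: T is the set of vectors supported on I = supp(x₀), e = sign(x₀), and η₀ = Φ^*(Φ_I^+)^* sign(x_{0,I}). A minimal sketch (problem sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
P, N, s = 60, 120, 8
Phi = rng.standard_normal((P, N))
x0 = np.zeros(N)
I = rng.choice(N, s, replace=False)
x0[I] = rng.standard_normal(I.size)

q0 = np.linalg.pinv(Phi[:, I].T) @ np.sign(x0[I])   # minimal-norm q with Phi_I^* q = e
eta0 = Phi.T @ q0                                   # eta0 = Phi^* q0
off = np.setdiff1d(np.arange(N), I)
print(np.abs(eta0[I] - np.sign(x0[I])).max())       # ~0: the constraint eta0_T = e holds
print(np.abs(eta0[off]).max())                      # < 1  =>  eta0 in D-bar(x0)
```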
Compressed Sensing Setting

Random matrix: Φ ∈ ℝ^{P×N}, Φ_{i,j} ∼ 𝒩(0,1), i.i.d.

Sparse vectors, J = || · ||₁:        [Wainwright 2009] [Dossal et al. 2011]
Theorem: let s = ||x₀||₀. If P ≥ 2s log(N), then η₀ ∈ D̄(x₀) with high probability on Φ.

Phase transitions:   L² stability, P ∼ 2s log(N/s)   vs.   model stability, P ∼ 2s log(N).

→ Similar results for || · ||_{1,2}, || · ||_*, || · ||_∞.
→ Not using RIP techniques (non-uniform result on x₀).
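A rough Monte Carlo sketch of the two regimes: for fixed (N, s) it estimates, as P grows, the empirical probability of exact ℓ¹ recovery (the LP of the earlier sketch) and the probability that the Fuchs pre-certificate η₀ certifies the support, printing the two theoretical thresholds for comparison. Sizes and trial counts are arbitrary and kept small.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, s, trials = 200, 5, 20

def trial(P):
    Phi = rng.standard_normal((P, N))
    I = rng.choice(N, s, replace=False)
    x0 = np.zeros(N)
    x0[I] = rng.standard_normal(s)
    # exact l1 recovery via a linear program (split x = u - v)
    res = linprog(np.ones(2 * N), A_eq=np.hstack([Phi, -Phi]),
                  b_eq=Phi @ x0, bounds=(0, None))
    exact = np.linalg.norm(res.x[:N] - res.x[N:] - x0) < 1e-6
    # Fuchs pre-certificate criterion ||eta0_{I^c}||_inf < 1
    eta0 = Phi.T @ (np.linalg.pinv(Phi[:, I].T) @ np.sign(x0[I]))
    fuchs = np.abs(np.delete(eta0, I)).max() < 1
    return exact, fuchs

print("2 s log(N/s) =", round(2 * s * np.log(N / s), 1),
      "  2 s log(N) =", round(2 * s * np.log(N), 1))
for P in (20, 40, 60, 80):
    r = np.array([trial(P) for _ in range(trials)], dtype=float).mean(axis=0)
    print(f"P={P}: exact recovery {r[0]:.2f}, eta0 certifies {r[1]:.2f}")
```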
1-D Sparse Spikes Deconvolution

Φx = Σᵢ xᵢ φ(· − Δi),   J(x) = ||x||₁.
Increasing Δ: reduces correlation, reduces resolution.

(Figure: ||η_{0,I^c}||_∞ as a function of Δ.)

With I = {j : x₀(j) ≠ 0}:
    ||η_{0,I^c}||_∞ < 1  ⟺  η₀ ∈ D̄(x₀)  ⟺  support recovery.
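The criterion ||η_{0,I^c}||_∞ can be evaluated directly. This sketch does so for a 1-D convolution with a Gaussian filter (the filter shape and width are assumptions of the sketch), showing the criterion fall below 1 as the spacing Δ between two spikes grows.

```python
import numpy as np

N = 256
t = np.arange(N)
phi = lambda c: np.exp(-(t - c) ** 2 / (2 * 4.0 ** 2))   # assumed width-4 Gaussian filter
Phi = np.stack([phi(c) for c in t], axis=1)              # Phi x = sum_i x_i phi(. - i)

for Delta in (4, 8, 16, 32):
    I = np.array([N // 2 - Delta, N // 2 + Delta])       # two spikes, spacing 2*Delta
    q0 = np.linalg.pinv(Phi[:, I].T) @ np.ones(2)        # Fuchs pre-certificate
    eta0 = Phi.T @ q0
    off = np.setdiff1d(t, I)
    print(Delta, np.abs(eta0[off]).max())                # < 1  <=>  support recovery
```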
Conclusion

Gauges: encode linear models as singular points.

Performance measures: L² error and model identification lead to different CS guarantees.

Specific certificate η₀.

Open problems:
– Approximate model recovery T_{x⋆} ≈ T_{x₀}.
– CS performance with complicated gauges (e.g. TV).