Generative Adversarial Networks as an Accommodative Memory for Cognitive
Waveform Synthesis
Abstract
This paper presents a practical example where generative adversarial
networks (GANs) can be employed as an accommodative memory unit (AMU).
An array of such units can memorize/learn the results of any algorithm. This
kind of memory can accommodate its response to new, unseen scenarios by
traversing the GAN's latent space to find the best answer.
Accordingly, accommodative memory (AM) can be viewed as a generalization
of a look-up table (LUT), in which the writing operation corresponds to
training an AMU, and the reading operation corresponds to inference or to
traversing its latent space.
We explore cognitive radar waveform synthesis to showcase a practical
application of the proposed AM concept. In this regard, a Wasserstein
GAN (WGAN) is trained as an AMU for a particular ambiguity function (AF)
shaping scenario.
Here, retrieving information for the most frequent scenarios, called
input basis scenarios (IBSs), requires only a forward pass of the
generator. For more complicated input scenarios, the memory accommodates
the input by traversing the latent space using the Adam optimizer.
Compared to redesigning the AF from scratch, the AM can recall or accommodate
new scenarios several orders of magnitude faster, at the expense of
additional memory hardware.
As an auxiliary result, we also demonstrate that traditional algorithms
can be outperformed in terms of suppression level by penalizing the loss
function according to the desired AF.
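To make the accommodation step concrete, the following is a minimal sketch of latent-space traversal with Adam. A toy linear map stands in for the trained WGAN generator, and a squared-error loss stands in for the AF-shaping objective; all names (`generator`, `target`, the optimizer constants) are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: "accommodating" a new scenario by searching the
# generator's latent space with Adam. A fixed random linear map plays the
# role of the trained WGAN generator G; the real loss would compare
# ambiguity functions rather than raw output vectors.
import numpy as np

rng = np.random.default_rng(0)
latent_dim, out_dim = 8, 32

W = rng.standard_normal((out_dim, latent_dim))   # toy "generator" weights

def generator(z):
    return W @ z                                  # stand-in for G(z)

# Desired response for the new, unseen scenario (here: output of a hidden z*)
target = generator(rng.standard_normal(latent_dim))

def loss_and_grad(z):
    r = generator(z) - target
    return 0.5 * np.dot(r, r), W.T @ r            # L2 loss and its gradient in z

# Minimal Adam optimizer (Kingma & Ba defaults, with a decaying step size)
z = np.zeros(latent_dim)
m = np.zeros_like(z)
v = np.zeros_like(z)
beta1, beta2, base_lr, eps = 0.9, 0.999, 0.1, 1e-8

initial_loss, _ = loss_and_grad(z)
for t in range(1, 1001):
    _, g = loss_and_grad(z)
    m = beta1 * m + (1 - beta1) * g               # first-moment estimate
    v = beta2 * v + (1 - beta2) * g * g           # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                  # bias correction
    v_hat = v / (1 - beta2 ** t)
    lr = base_lr / np.sqrt(t)                     # decay to damp oscillation
    z -= lr * m_hat / (np.sqrt(v_hat) + eps)

final_loss, _ = loss_and_grad(z)
print(initial_loss, final_loss)
```

In the paper's setting, only the latent vector z is updated; the generator's weights stay frozen, which is why accommodation is orders of magnitude cheaper than rerunning a full AF-shaping design algorithm.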