
Deep Learning for Design and Physical Neural Networks

 
Mustafa Umut Sarac
Indoor athlete

Joined: 19.09.2018
Posts: 50
Location: Istanbul - Turkey

Posted: 19.09.2018, 20:36    Subject: Deep Learning for Design and Physical Neural Networks

Deep learning is a technology for training neural networks, or something along those lines.

You can build a neural network program on your computer and feed it 50,000 different automobile shapes. When training is over, you show the network your own automobile and it optimizes your design into a highly artistic one.

A physical neural network is an analog computer: you can print it with a 3D printer and it works as described above.

Each layer is a filter, and many such filters together make a deep learning neural network. It needs no electronics or electricity; it works as soon as daylight carries an image through it.
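To make this concrete, here is a minimal sketch in Python/PyTorch of the train-then-optimize idea described above. Everything in it is a placeholder assumption: the 64-number vectors stand in for automobile shapes, the random scores stand in for whatever quality label a real training set would carry, and the network is just a small generic one.

[code]
# Minimal sketch (PyTorch), assuming shapes are encoded as fixed-length vectors.
import torch
import torch.nn as nn

# Hypothetical surrogate: maps a shape vector to a predicted quality score.
surrogate = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

# Placeholder training data: 50,000 random "shapes" with random scores.
shapes = torch.randn(50_000, 64)
scores = torch.randn(50_000, 1)

opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(5):                                  # a few epochs, for illustration
    for i in range(0, len(shapes), 256):
        x, y = shapes[i:i + 256], scores[i:i + 256]
        loss = nn.functional.mse_loss(surrogate(x), y)
        opt.zero_grad(); loss.backward(); opt.step()

# Now "show your automobile to the network": treat your own shape vector as the
# thing being optimized, and ascend the frozen surrogate's predicted score.
for p in surrogate.parameters():
    p.requires_grad_(False)
my_shape = torch.randn(1, 64, requires_grad=True)
design_opt = torch.optim.Adam([my_shape], lr=1e-2)
for _ in range(200):
    design_opt.zero_grad()
    (-surrogate(my_shape)).sum().backward()         # maximize the predicted score
    design_opt.step()
[/code]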

I will post the paper names and their links below.

best,

mustafa umut sarac
istanbul



Mustafa Umut Sarac
Posted: 19.09.2018, 20:42

Geodesic Convolutional Shape Optimization
Pierre Baqué*, Edoardo Remelli*, François Fleuret, Pascal Fua

Abstract
Aerodynamic shape optimization has many industrial applications. Existing methods, however, are so computationally demanding that typical engineering practices are to either simply try a limited number of hand-designed shapes or restrict oneself to shapes that can be parameterized using only few degrees of freedom. In this work, we introduce a new way to optimize complex shapes fast and accurately. To this end, we train Geodesic Convolutional Neural Networks to emulate a fluid-dynamics simulator. The key to making this approach practical is remeshing the original shape using a poly-cube map, which makes it possible to perform the computations on GPUs instead of CPUs. The neural net is then used to formulate an objective function that is differentiable with respect to the shape parameters, which can then be optimized using a gradient-based technique. This outperforms state-of-the-art methods by 5 to 20% for standard problems and, even more importantly, our approach applies to cases that previous methods cannot handle.

arXiv:1802.04016v1 [cs.CE] 12 Feb 2018
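The central step of the abstract above is using the trained network as an objective that is differentiable with respect to the shape parameters. Here is a hedged sketch of just that step in PyTorch: the "emulator" is a stand-in for the trained Geodesic CNN, and the penalty that keeps the shape near the baseline is my own illustrative assumption, not something taken from the paper.

[code]
import torch

# Stand-in for the trained emulator: shape parameters -> predicted drag.
emulator = torch.nn.Sequential(
    torch.nn.Linear(32, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
for p in emulator.parameters():
    p.requires_grad_(False)

theta0 = torch.randn(32)                    # baseline shape parameters
theta = theta0.clone().requires_grad_(True)

for step in range(100):
    drag = emulator(theta).squeeze()                        # differentiable prediction
    objective = drag + 0.1 * (theta - theta0).pow(2).sum()  # stay near the baseline
    grad, = torch.autograd.grad(objective, theta)
    with torch.no_grad():
        theta -= 0.05 * grad                                # gradient-based update
[/code]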

Mustafa Umut Sarac
Posted: 19.09.2018, 20:46

Cite as: X. Lin et al., Science 10.1126/science.aat8084 (2018).
First release: 26 July 2018, www.sciencemag.org (page numbers not final at time of first release).

All-optical machine learning using diffractive deep neural networks

Deep learning is one of the fastest-growing machine learning methods (1), and it uses multi-layered artificial neural networks implemented in a computer to digitally learn data representation and abstraction and to perform advanced tasks, comparable to or even superior to the performance of human experts. Recent examples where deep learning has made major advances in machine learning include medical image analysis (2), speech recognition (3), language translation (4), and image classification (5), among others (1, 6). Beyond some of these mainstream applications, deep learning methods are also being used for solving inverse imaging problems (7–13).

We introduce an all-optical deep learning framework, where the neural network is physically formed by multiple layers of diffractive surfaces that work in collaboration to optically perform an arbitrary function that the network can statistically learn. While the inference/prediction of the physical network is all-optical, the learning part that leads to its design is done through a computer. We term this framework a Diffractive Deep Neural Network (D2NN) and demonstrate its inference capabilities through both simulations and experiments. Our D2NN can be physically created by using several transmissive and/or reflective layers (14), where each point on a given layer either transmits or reflects the incoming wave, representing an artificial neuron that is connected to other neurons of the following layers through optical diffraction (Fig. 1A). Following Huygens' Principle, our terminology is based on each point on a given layer acting as a secondary source of a wave, the amplitude and phase of which are determined by the product of the input wave and the complex-valued transmission or reflection coefficient at that point; see (14) for an analysis of the waves within a D2NN. Therefore, an artificial neuron in a D2NN is connected to other neurons of the following layer through a secondary wave that is modulated in amplitude and phase by both the input interference pattern created by the earlier layers and the local transmission/reflection coefficient at that point. As an analogy to standard deep neural networks (Fig. 1D), one can consider the transmission/reflection coefficient of each point/neuron as a multiplicative "bias" term, which is a learnable network parameter that is iteratively adjusted during the training process of the diffractive network, using an error back-propagation method. After this numerical training phase, the D2NN design is fixed and the transmission/reflection coefficients of the neurons of all the layers are determined. This D2NN design, once physically fabricated using e.g., 3D printing, lithography, etc., can then perform, at the speed of light, the specific task that it is trained for, using only optical diffraction and passive optical components/layers that do not need power, creating an efficient and fast way of implementing machine learning tasks.

In general, the phase and amplitude of each neuron can be a learnable parameter, providing a complex-valued modulation at each layer, which improves the inference performance of the diffractive network (fig. S1) (14). For coherent transmissive networks with phase-only modulation, each layer can be approximated as a thin optical element (Fig. 1). Through deep learning, the phase values of the neurons of each layer of the diffractive network are iteratively adjusted (trained) to […]
Xing Lin*, Yair Rivenson*, Nezih T. Yardimci, Muhammed Veli, Yi Luo, Mona Jarrahi, Aydogan Ozcan†
Electrical and Computer Engineering Department, Bioengineering Department, California NanoSystems Institute (CNSI), and David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA.
*These authors contributed equally to this work.
†Corresponding author. Email: ozcan@ucla.edu
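As a rough numerical picture of the all-optical forward pass described above, here is a minimal NumPy sketch: light diffracts from layer to layer (an angular-spectrum propagation step) and each layer multiplies the field by a phase-only transmission coefficient. The grid size, neuron pitch, layer spacing and the random (untrained) phase masks are all illustrative assumptions; in the real D2NN the phases come out of the computer-based training the paper describes.

[code]
import numpy as np

N = 256                  # neurons per side of each layer
dx = 400e-6              # neuron pitch [m] (assumed)
wavelength = 750e-6      # [m], roughly the THz illumination used in the paper
z = 0.03                 # layer-to-layer spacing [m] (assumed)
k = 2 * np.pi / wavelength

# Angular-spectrum transfer function for free-space propagation over distance z.
fx = np.fft.fftfreq(N, d=dx)
FX, FY = np.meshgrid(fx, fx)
arg = np.maximum(1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, 0.0)
H = np.exp(1j * k * z * np.sqrt(arg))

def propagate(field):
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(0)
# Five phase-only layers; random phases stand in for the learned coefficients
# t = exp(i * phi) that training would normally determine.
layers = [np.exp(1j * 2 * np.pi * rng.random((N, N))) for _ in range(5)]

field = np.zeros((N, N), dtype=complex)
field[96:160, 96:160] = 1.0                        # a simple square input aperture

for t in layers:
    field = propagate(field) * t                   # diffract, then modulate
output_intensity = np.abs(propagate(field)) ** 2   # intensity at the detector plane
[/code]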

Mustafa Umut Sarac
Posted: 09.10.2018, 15:33

This 3D-printed AI construct analyzes by bending light
Devin Coldewey (@techcrunch), TechCrunch

Machine learning is everywhere these days, but it’s usually more or less invisible: it sits in the background, optimizing audio or picking out faces in images. But this new system is not only visible, but physical: it performs AI-type analysis not by crunching numbers, but by bending light. It’s weird and unique, but counter-intuitively, it’s an excellent demonstration of how deceptively simple these “artificial intelligence” systems are.

Machine learning systems, which we frequently refer to as a form of artificial intelligence, at their heart are just a series of calculations made on a set of data, each building on the last or feeding back into a loop. The calculations themselves aren’t particularly complex — though they aren’t the kind of math you’d want to do with a pen and paper. Ultimately all that simple math produces a probability that the data going in is a match for various patterns it has “learned” to recognize.
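That "series of calculations" really is just a few matrix multiplications and a squashing step. A toy NumPy version with made-up, already "trained" weights shows how little machinery is left once training is over:

[code]
import numpy as np

rng = np.random.default_rng(1)
# Pretend these numbers were learned already; after training they never change.
W1, b1 = rng.standard_normal((16, 64)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((10, 16)), rng.standard_normal(10)

def predict(x):
    h = np.maximum(W1 @ x + b1, 0.0)      # layer 1: weighted sum, then ReLU
    logits = W2 @ h + b2                  # layer 2: weighted sum
    p = np.exp(logits - logits.max())
    return p / p.sum()                    # probabilities over 10 possible patterns

probs = predict(rng.standard_normal(64))  # e.g. a flattened 8x8 input image
print(probs.argmax(), round(float(probs.max()), 3))
[/code]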

The thing is, though, that once these “layers” have been “trained” and the math finalized, in many ways it’s performing the same calculations over and over again. Usually that just means it can be optimized and won’t take up that much space or CPU power. But researchers from UCLA show that it can literally be solidified, the layers themselves actual 3D-printed layers of transparent material, imprinted with complex diffraction patterns that do to light going through them what the math would have done to numbers.

If that’s a bit much to wrap your head around, think of a mechanical calculator. Nowadays it’s all done digitally in computer logic, but back in the day calculators used actual mechanical pieces moving around — something adding up to 10 would literally cause some piece to move to a new position. In a way this “diffractive deep neural network” is a lot like that: it uses and manipulates physical representations of numbers rather than electronic ones.

As the researchers put it:

Each point on a given layer either transmits or reflects an incoming wave, which represents an artificial neuron that is connected to other neurons of the following layers through optical diffraction. By altering the phase and amplitude, each “neuron” is tunable.

“Our all-optical deep learning framework can perform, at the speed of light, various complex functions that computer-based neural networks can implement,” write the researchers in the paper describing their system, published today in Science.

To demonstrate it they trained a deep learning model to recognize handwritten numerals. Once it was final, they took the layers of matrix math and converted them into a series of optical transformations. For example, a layer might add values together by refocusing the light from both onto a single area of the next layer — the real calculations are much more complex, but hopefully you get the idea.
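For a rough feel of the "train it, then freeze the math" step, here is a sketch using scikit-learn's small built-in 8x8 digits set rather than MNIST (an assumption made only to keep the example self-contained and download-free). The optical conversion itself is of course not shown; the point is that after fit() the layer matrices are fixed numbers that never change again.

[code]
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)              # 8x8 handwritten digit images
X_train, X_test, y_train, y_test = train_test_split(X / 16.0, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# These matrices are now finalized -- the "same calculations over and over" --
# which is what the UCLA group maps onto diffractive layers instead of a CPU.
frozen_weights = clf.coefs_        # one weight matrix per layer
frozen_biases = clf.intercepts_
[/code]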

By arranging millions of these tiny transformations on the printed plates, the light that enters one end comes out the other structured in such a way that the system can tell whether it’s a 1, 2, 3 and so on with better than 90 percent accuracy.

What use is that, you ask? Well, none in its current form. But neural networks are extremely flexible tools, and it would be perfectly possible to have a system recognize letters instead of numbers, making an optical character recognition system work totally in hardware with almost no power or calculation required. And why not basic face or figure recognition, no CPU necessary? How useful would that be to have in your camera?

The real limitations here are manufacturing ones: it’s difficult to create the diffractive plates with the level of precision required to perform some of the more demanding processing. After all, if you need to calculate something to the seventh decimal place, but the printed version is only accurate to the third, you’re going to run into trouble.

This is only a proof of concept — there’s no dire need for giant number-recognition machines — but it’s a fascinating one. The idea could prove to be influential in camera and machine learning technology — structuring light and data in the physical world rather than the digital one. It may feel like it’s going backwards, but perhaps the pendulum is simply swinging back the other direction.

Mustafa Umut Sarac
Posted: 09.10.2018, 15:34





[img]https://techcrunch.com/wp-content/uploads/2018/07/optical-dnn.jpg?w=990&crop=1[/img]

Mustafa Umut Sarac
Posted: 09.10.2018, 15:37

The trick is to find the most excited neurons: the location of that peak response on the output plane indicates the answer.
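In code terms, that readout might look like the following: sum the detector-plane intensity inside each class's region and pick the brightest one. The 2x5 region layout here is purely an illustrative assumption; the paper assigns one detector area per digit.

[code]
import numpy as np

rng = np.random.default_rng(2)
intensity = rng.random((256, 250))            # stand-in for the detector-plane image

# Ten hypothetical detector regions, one per digit, laid out on a 2 x 5 grid.
energies = []
for r in range(2):
    for c in range(5):
        patch = intensity[r * 128:(r + 1) * 128, c * 50:(c + 1) * 50]
        energies.append(patch.sum())

predicted_digit = int(np.argmax(energies))    # the most "excited" region wins
print(predicted_digit)
[/code]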

Mustafa Umut Sarac
Posted: 09.10.2018, 17:22

https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20020051082.pdf

Prediction of aerodynamic coefficients using Neural Networks for Sparse Data

Mustafa Umut Sarac
Posted: 09.10.2018, 17:54

Experimental Study and Neural Network Modeling of Aerodynamic Characteristics of Canard Aircraft at High Angles of Attack
Dmitry Ignatyev* and Alexander Khrabrov
Central Aerohydrodynamic Institute, 140180 Zhukovsky, Moscow Region, Russia; khrabrov@tsagi.ru
*Correspondence: d.ignatyev@mail.ru
Received: 29 December 2017; Accepted: 28 February 2018; Published: 2 March 2018
Abstract: Flow over an aircraft at high angles of attack is characterized by a combination of separated and vortical flows that interact with each other and with the airframe. As a result, there is a set of phenomena negatively affecting the aircraft's performance, stability and control, namely, degradation of lifting force, nonlinear variation of pitching moment, positive damping, etc. A wind tunnel study of the aerodynamic characteristics of a prospective transonic aircraft, which is in a canard configuration, is discussed in the paper. A three-stage experimental campaign was undertaken. In the first stage, a steady aerodynamic experiment was conducted. The influence of reduced oscillation frequency and angle of attack on unsteady aerodynamic characteristics was studied in the second stage. In the third stage, forced large-amplitude oscillation tests were carried out for the detailed investigation of the unsteady aerodynamics in the extended flight envelope. The experimental results demonstrate the strongly nonlinear behavior of the aerodynamic characteristics because of canard vortex effects on the wing. The obtained data are used to design and test mathematical models of unsteady aerodynamics via different popular approaches, namely the Neural Network (NN) technique and the phenomenological state-space modeling technique. Different NN architectures, namely feed-forward and recurrent, are considered and compared. Thorough analysis of the performance of the models revealed that the Recurrent Neural Network (RNN) is a universal approximation tool for modeling dynamic processes with high generalization abilities.
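As a hedged sketch of the recurrent approach the abstract ends on: a small PyTorch RNN mapping a time history of motion variables to an unsteady aerodynamic coefficient. The input/output choices, sizes and random placeholder data are my assumptions for illustration only, not the architecture from the paper.

[code]
import torch
import torch.nn as nn

class CoeffRNN(nn.Module):
    """Sequence of (angle of attack, pitch rate) -> coefficient per time step."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(input_size=2, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, x):            # x: (batch, time, 2)
        h, _ = self.rnn(x)
        return self.head(h)          # (batch, time, 1)

model = CoeffRNN()
x = torch.randn(8, 100, 2)           # placeholder forced-oscillation histories
y = torch.randn(8, 100, 1)           # placeholder measured coefficient
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
[/code]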

Mustafa Umut Sarac
Posted: 09.10.2018, 18:00

From the paper above, this passage explains how neural networks are used to study the nonlinearity and how such a network works:

NN Architectures
The FFNN, whose scheme is given in Figure 15a, can be considered as a directed graph with neurons placed in its nodes. The neurons of the first layer do not implement a nonlinear mapping but distribute the input signals between the neurons of the first hidden layer. A neuron of the hidden layer is an elementary calculating unit. A set of signals S_j, j = 1...n, from the input layer is fed into a neuron of the hidden layer. The coefficients w_jk correspond to the signal-transmitting connections and act as weight factors when summing the input signals. The neuron bias b_k is added to the weighted sum of the input signals, and the resulting sum is mapped through a nonlinear activation function f_k. The mapped signal φ_k goes forward to the neurons of the next layer, and so on layer by layer; the final layer produces the result.
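Written out in NumPy with the same symbols, a hidden layer is one line of math: φ_k = f(Σ_j w_jk · S_j + b_k). A minimal sketch, with random numbers standing in for the trained weights:

[code]
import numpy as np

rng = np.random.default_rng(3)
S = rng.standard_normal(4)                  # input signals S_j, j = 1..n (n = 4 here)
w = rng.standard_normal((4, 8))             # weights w_jk: input j -> hidden neuron k
b = rng.standard_normal(8)                  # biases b_k
f = np.tanh                                 # nonlinear activation function f_k

phi = f(S @ w + b)                          # hidden-layer outputs phi_k
output = phi @ rng.standard_normal((8, 1))  # final (output) layer gives the result
[/code]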