
Tiny Tapeout 4: A PWM clone of Covox Speech Thing

21 June 2024 at 20:00

Tiny Tapeout is an interesting project, leveraging the power of cloud computing and collaborative purchasing to make the mysterious art of IC design more accessible to hardware hackers. [Yeo Kheng Meng] is one such hacker, and they have produced their very first custom IC for use in their retrocomputing efforts. As they lament, they left it a little late for the shuttle-run submission deadline, so they came up with a very simple project with the equivalent behaviour of the Covox Speech Thing, which is just a basic R-2R ladder DAC hanging off a PC parallel port.

The computed gate-level routing of the ASIC layout

The plan was to capture an 8-bit input bus and compare it against a free-running counter. If the input value is larger than the counter, the output goes high; otherwise, it goes low. This produces a PWM waveform whose duty cycle represents the input value, and following the digital output with an RC low-pass filter recovers an analogue representation. It's all very simple stuff. A few details to contend with are specific to Tiny Tapeout, such as taking note of the enable and global reset signals, which are passed down from the chip-level wrapper to indicate when a design has control of the physical IOs and is selected for operation. [Yeo] noticed that the GitHub post-synthesis simulation failed because the design didn't honour the reset condition and initialise those pesky flip-flops.
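For a feel of how little logic this takes, here is a minimal Python sketch of the comparator-and-filter idea described above. The function names and the first-order IIR stand-in for the RC filter are illustrative assumptions, not taken from [Yeo]'s actual Verilog:

```python
def pwm_period(sample, counter_bits=8):
    """One full PWM period: the output is high while the input
    sample exceeds the free-running counter value, low otherwise."""
    return [1 if sample > counter else 0
            for counter in range(2 ** counter_bits)]

def rc_lowpass(bits, alpha):
    """First-order IIR approximation of an RC low-pass filter:
    y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    y, out = 0.0, []
    for x in bits:
        y += alpha * (x - y)
        out.append(y)
    return out

# A mid-scale sample of 128 is high for 128 of the 256 counter
# ticks, i.e. a 50% duty cycle.
period = pwm_period(128)
print(sum(period) / len(period))      # 0.5

# With a time constant much longer than one PWM period, the RC
# filter settles near the encoded value: 128 / 256 = 0.5.
filtered = rc_lowpass(period * 32, alpha=0.001)
print(sum(filtered[-256:]) / 256)     # ~0.5
```

Note the filter's time constant (1/alpha samples here) must be much longer than one PWM period, or the output ripple swamps the recovered signal.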

After throwing the design onto a Mimas A7 Artix-7 FPGA board for a quick test, data sent from a parallel-port-connected PC popped out as a PWM waveform as expected, and some test audio could be played. Whilst it's true that you don't have to prototype on an FPGA, and some would argue it's a lot of extra effort in many cases, without a good-quality graphical simulation and a robust testbench you're practically working blind. And that's not how working chips get made.
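For a flavour of the kind of check such a testbench performs, here is a minimal software equivalent, reusing the pwm_period() sketch from above. It is in Python rather than an HDL and is purely illustrative, not [Yeo]'s actual test code:

```python
# Hypothetical software "testbench" for the PWM comparator sketch
# above: for an input of N, exactly N of the 256 counter values
# (0..N-1) satisfy sample > counter, so each period should contain
# exactly N high ticks.
for sample in range(256):
    period = pwm_period(sample)
    assert sum(period) == sample, f"duty cycle mismatch at {sample}"
print("all 256 input codes produce the expected duty cycle")
```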

If you want to read into Tiny Tapeout some more, then we’ve a quick guide for that. Or, perhaps hear it direct from the team instead?

Human Brains Can Tell Deepfake Voices from Real Ones

By: Maya Posch
19 June 2024 at 05:00

Although it’s generally accepted that synthesized voices which mimic real people’s voices (so-called ‘deepfakes’) can be pretty convincing, what does our brain really make of these mimicry attempts? To answer this question, researchers at the University of Zurich put a number of volunteers into fMRI scanners, allowing them to observe how their brains would react to real and synthesized voices. The perhaps somewhat surprising finding is that activity in two brain regions differs depending on whether the subject is hearing a real or a fake voice, meaning that on some level we are aware that we are listening to a deepfake.

The detailed findings by [Claudia Roswandowitz] and colleagues are published in Communications Biology. For the study, 25 volunteers were asked to judge the voice samples they heard as either natural or synthesized, as well as to match each sample to the identity of the supposed speaker. The natural voices came from four male (German) speakers, whose recordings were also used to train the synthesis model. Not only did identity-matching performance crater with the synthesized voices, but the resulting fMRI scans also showed very different brain activity depending on whether the voice was natural or synthesized.

One of these regions was the auditory cortex, which clearly indicates that there were acoustic differences between the natural and fake voices; the other was the nucleus accumbens (NAcc). This part of the basal forebrain is involved in the cognitive processing of motivation, reward, and reinforcement learning, and plays a key role in social, maternal, and addictive behavior. Overall, the deepfake voices are characterized by acoustic imperfections and do not elicit the same sense of recognition (and thus reward sensation) as natural voices do.

Until deepfake voices can be made much better, it would appear that we are still safe, for now.
