Engineering and Technology Updates
New imaging technique brings us closer to simplified, low-cost agricultural quality assessment
Hyperspectral imaging is a useful technique for analyzing the chemical composition of food and agricultural products, but it is costly and complicated, which limits its practical application. A team of University of Illinois Urbana-Champaign researchers has developed a method to reconstruct hyperspectral images from standard RGB (red, green, blue) images using deep machine learning. The technique could greatly simplify the analytical process and potentially revolutionize product assessment in the agricultural industry.

The researchers tested their method by analyzing the chemical composition of sweet potatoes. They focused on soluble solid content in one study and dry matter in a second study, two features that influence the taste, nutritional value, marketability, and processing suitability of sweet potatoes. Using deep learning models, they converted the information from RGB images into hyperspectral images.

“With RGB images, you can only detect visible attributes like color, shape, size, and external defects; you can’t detect any chemical parameters. In RGB images you have wavelengths from 400 to 700 nanometers, and three channels — red, green, and blue. But with hyperspectral images you have many channels and wavelengths from 700 to 1000 nm. With deep learning methods, we can map and reconstruct that range so we now can detect the chemical attributes from RGB images,” said Mohammed Kamruzzaman, assistant professor in ABE and corresponding author on both papers.

Hyperspectral imaging captures a detailed spectral signature at each spatial location across hundreds of narrow wavelength bands, which combine to form hypercubes. Applying cutting-edge deep learning algorithms, Kamruzzaman and coauthor Ahmed created a model that reconstructs these hypercubes from RGB images, providing the information needed for product analysis. They calibrated the spectral model with reconstructed hyperspectral images of sweet potatoes, achieving over 70% accuracy in predicting soluble solid content and 88% accuracy for dry matter content, a significant improvement over previous studies.

In a third paper, the research team applied deep learning methods to reconstruct hyperspectral images for predicting chick embryo mortality, which has applications in the egg and hatchery industry. They explored different techniques and made recommendations for the most accurate approach.

“Our results show great promise for revolutionizing agricultural product quality assessment. By reconstructing detailed chemical information from simple RGB images, we’re opening new possibilities for affordable, accessible analysis. While challenges remain in scaling this technology for industrial use, the potential to transform quality control across the agricultural sector makes this a truly exciting endeavor,” Kamruzzaman concluded.
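To make the RGB-to-hyperspectral mapping concrete, here is a minimal sketch (in PyTorch) of the kind of network that lifts a 3-channel image to a many-band hypercube. The architecture, layer sizes, and band count are assumptions for illustration, not the team’s published models.

```python
# Illustrative sketch only: a small convolutional network mapping a 3-channel
# RGB image to a many-channel "hyperspectral" cube, in the spirit of spectral
# reconstruction models. Channel counts and layers are hypothetical.
import torch
import torch.nn as nn

class SpectralReconstructionNet(nn.Module):
    def __init__(self, n_bands: int = 60):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),   # lift RGB into feature space
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),  # mix spatial context
            nn.ReLU(),
            nn.Conv2d(64, n_bands, kernel_size=1),        # predict one value per band
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (batch, 3, H, W) -> hypercube: (batch, n_bands, H, W)
        return self.net(rgb)

model = SpectralReconstructionNet(n_bands=60)
fake_rgb = torch.rand(1, 3, 128, 128)
hypercube = model(fake_rgb)
print(hypercube.shape)  # torch.Size([1, 60, 128, 128])
```

In practice such a model would be trained on paired RGB and hyperspectral captures of the same samples, then the reconstructed bands fed into a calibrated chemometric model for attributes like soluble solid content.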
Source: https://www.sciencedaily.com/releases/2024/09/240930160206.htm
New security protocol shields data from attackers during cloud-based computation
Deep-learning models are used in many fields, from health care diagnostics to financial forecasting. However, these models are so computationally intensive that they require powerful cloud-based servers. This reliance on cloud computing poses significant security risks, particularly in areas like health care, where hospitals may be hesitant to use AI tools to analyze confidential patient data due to privacy concerns.

To tackle this pressing issue, MIT researchers have developed a security protocol that leverages the quantum properties of light to guarantee that data sent to and from a cloud server remain secure during deep-learning computations. By encoding data into the laser light used in fiber-optic communications systems, the protocol exploits fundamental principles of quantum mechanics, making it impossible for attackers to copy or intercept the information without detection. Moreover, the technique guarantees security without compromising the accuracy of the deep-learning models: in tests, the researchers demonstrated that their protocol could maintain 96 percent accuracy while ensuring robust security.

The cloud-based computation scenario the researchers focused on involves two parties: a client that has confidential data, like medical images, and a central server that controls a deep-learning model. The client wants to use the model to make a prediction, such as whether a patient has cancer based on medical images, without revealing information about the patient. Sensitive data must therefore be sent to generate a prediction, yet the patient data must remain secure throughout. At the same time, the server does not want to reveal any part of a proprietary model that a company like OpenAI spent years and millions of dollars building.

In digital computation, a bad actor could easily copy the data sent from the server or the client. Quantum information, on the other hand, cannot be perfectly copied. The researchers leverage this property, known as the no-cloning principle, in their security protocol.

In the protocol, the server encodes the weights of a deep neural network into an optical field using laser light. A neural network is a deep-learning model that consists of layers of interconnected nodes, or neurons, that perform computation on data. The weights are the components of the model that perform the mathematical operations on each input, one layer at a time; the output of one layer is fed into the next until the final layer generates a prediction.

The server transmits the network’s weights to the client, which implements operations to get a result based on its private data. The data remain shielded from the server. At the same time, the security protocol allows the client to measure only one result, and the quantum nature of light prevents the client from copying the weights. Once the client feeds the first result into the next layer, the protocol is designed to cancel out the first layer so the client can’t learn anything else about the model.

Instead of measuring all the incoming light from the server, the client measures only the light that is necessary to run the deep neural network and feed the result into the next layer, then sends the residual light back to the server for security checks. Because of the no-cloning theorem, the client unavoidably applies tiny errors to the model while measuring its result.
When the server receives the residual light from the client, it can measure these errors to determine whether any information was leaked. Importantly, this residual light is proven not to reveal the client’s data.

Modern telecommunications equipment typically relies on optical fibers to transfer information because of the need to support massive bandwidth over long distances. Because this equipment already incorporates optical lasers, the researchers can encode data into light for their security protocol without any special hardware.

When they tested their approach, the researchers found that it could guarantee security for both server and client while enabling the deep neural network to achieve 96 percent accuracy. The tiny bit of information about the model that leaks when the client performs operations amounts to less than 10 percent of what an adversary would need to recover any hidden information. Working in the other direction, a malicious server could obtain only about 1 percent of the information it would need to steal the client’s data.
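As a rough mental model of the protocol’s layer-by-layer bookkeeping, the following purely classical toy may help. It is not a quantum simulation, and all sizes, noise scales, and quantities are invented for illustration: the client advances its activation one layer at a time, and a small perturbation (standing in for measurement back-action under no-cloning) accumulates for the server to inspect.

```python
# Classical toy of the protocol flow (illustrative only, not the MIT scheme's
# actual math): the client computes layer by layer on its private data, and a
# small "measurement error" per layer stands in for quantum back-action.
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.normal(size=(8, 8)) for _ in range(3)]  # server's private weights
x = rng.normal(size=8)                                 # client's private data

accumulated_backaction = 0.0
for W in layers:
    # Client applies the layer locally; the server never sees x.
    x = np.tanh(W @ x)
    # Measuring the optical encoding unavoidably perturbs it; model that
    # perturbation as small additive noise the server can later check.
    measurement_error = rng.normal(scale=1e-3, size=W.shape)
    accumulated_backaction += np.linalg.norm(measurement_error)

# Server-side check: error far above the honest-measurement baseline would
# indicate an attempt to copy more than one result.
print("prediction:", x.round(3))
print("accumulated back-action:", round(accumulated_backaction, 4))
```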
Source: https://www.sciencedaily.com/releases/2024/10/241001142659.htm
Helping robots zero in on the objects that matter
Imagine having to straighten up a messy kitchen, starting with a counter littered with sauce packets. If your goal is to wipe the counter clean, you might sweep up the packets as a group. If, however, you wanted to first pick out the mustard packets before throwing the rest away, you would sort more discriminately, by sauce type.

MIT engineers have developed a method that enables robots to make similarly intuitive, task-relevant decisions. The team’s new approach, named Clio, enables a robot to identify the parts of a scene that matter, given the tasks at hand. With Clio, a robot takes in a list of tasks described in natural language and, based on those tasks, determines the level of granularity required to interpret its surroundings and “remember” only the parts of a scene that are relevant.

In real experiments ranging from a cluttered cubicle to a five-story building on MIT’s campus, the team used Clio to automatically segment a scene at different levels of granularity, based on a set of tasks specified in natural-language prompts such as “move rack of magazines” and “get first aid kit.” The team also ran Clio in real time on a quadruped robot. As the robot explored an office building, Clio identified and mapped only those parts of the scene that related to the robot’s tasks (such as retrieving a dog toy while ignoring piles of office supplies), allowing the robot to grasp the objects of interest.

Clio is named after the Greek muse of history, for its ability to identify and remember only the elements that matter for a given task. The researchers envision that Clio would be useful in many situations and environments in which a robot must quickly survey and make sense of its surroundings in the context of its given task.

Huge advances in computer vision and natural language processing have enabled robots to identify objects in their surroundings. But until recently, robots could only do so in “closed-set” scenarios, where they are programmed to work in a carefully curated and controlled environment with a finite number of objects that the robot has been pretrained to recognize. In recent years, researchers have taken a more “open” approach to enable robots to recognize objects in more realistic settings. In the field of open-set recognition, researchers have leveraged deep-learning tools to build neural networks that can process billions of images from the internet, along with each image’s associated text. From millions of image-text pairs, a neural network learns to identify the segments in a scene that are characteristic of certain terms, such as a dog. A robot can then apply that neural network to spot a dog in a totally new scene.

But a challenge still remains: how to parse a scene in a way that is useful and relevant for a particular task. With Clio, the MIT team aimed to enable robots to interpret their surroundings with a level of granularity that can be automatically tuned to the tasks at hand. For instance, given a task of moving a stack of books to a shelf, the robot should be able to determine that the entire stack of books is the task-relevant object.
Likewise, if the task were to move only the green book from the stack, the robot should distinguish the green book as a single target object and disregard the rest of the scene, including the other books in the stack.

The team’s approach combines state-of-the-art computer vision and large language models, neural networks that make connections among millions of open-source images and associated text. They also incorporate mapping tools that automatically split an image into many small segments, which can be fed into the neural network to determine whether certain segments are semantically similar. The researchers then leverage an idea from classic information theory called the “information bottleneck,” which they use to compress the image segments in a way that picks out and stores the segments that are semantically most relevant to a given task.

Going forward, the team plans to adapt Clio to handle higher-level tasks and to build upon recent advances in photorealistic visual scene representations.
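A hedged sketch of the task-driven filtering idea follows. This is not Clio’s implementation: Clio selects segments and their granularity with an information-bottleneck objective, whereas this toy uses cosine similarity between placeholder embeddings as a simplified stand-in, and embed() is a hypothetical substitute for a real vision-language encoder.

```python
# Toy sketch of task-relevant segment filtering (not Clio's actual code):
# keep only scene segments whose similarity to some natural-language task
# clears a threshold. embed() is a deterministic placeholder, NOT a real
# vision-language model; the threshold is illustrative.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: deterministic pseudo-embedding keyed on the text's bytes.
    rng = np.random.default_rng(sum(ord(c) for c in text))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

tasks = ["move rack of magazines", "get first aid kit"]
segments = ["magazine rack", "first aid kit", "pile of office supplies", "dog toy"]

task_vecs = np.stack([embed(t) for t in tasks])
kept = []
for seg in segments:
    sims = task_vecs @ embed(seg)   # cosine similarity (unit vectors)
    if sims.max() > 0.2:            # illustrative relevance threshold
        kept.append((seg, float(sims.max())))

print(kept)  # segments deemed relevant to at least one task
```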
Source: https://www.sciencedaily.com/releases/2024/09/240930160224.htm
Fluoride-free batteries: Safeguarding the environment and enhancing performance
A research team led by Professor Soojin Park and Seoha Nam from the Department of Chemistry at POSTECH, in partnership with Hansol Chemical’s battery materials R&D center, has developed a new fluorine-free binder and electrolyte designed to advance eco-friendly, high-performance battery technology.

As environmental concerns intensify, the importance of sustainable materials in battery technology is growing. Traditional lithium batteries rely on fluorinated compounds such as polyvinylidene fluoride (PVDF) binders and lithium hexafluorophosphate (LiPF6, LP) salts. However, this “PVDF-LP” system releases highly toxic hydrogen fluoride (HF), which reduces battery performance and lifespan. Furthermore, PVDF is non-biodegradable, and with the European Union (EU) tightening regulations on PFAS, a ban on these substances is expected by 2026.

Researchers from POSTECH and Hansol Chemical designed a non-fluorinated battery system to comply with the upcoming environmental regulations and enhance battery performance. They created a lithium perchlorate (LiClO4, LC)-based electrolyte to replace fluorinated LP electrolytes, along with a non-fluorinated aromatic polyamide (APA) binder using Hansol Chemical’s proprietary technology. This “APA-LC” system is entirely free of fluorinated compounds.

The APA binder reinforces the bonding between the cathode’s active material and the aluminum current collector, preventing electrode corrosion in the electrolyte and significantly extending battery life. Additionally, the LC electrolyte, enriched with lithium chloride (LiCl) and lithium oxide (Li2O), lowers the energy barrier at the interface to promote ion migration, leading to faster lithium diffusion and superior output performance compared to the existing LP system. Overall, the APA-LC system exhibited greater oxidation stability than the conventional PVDF-LP system and maintained 20% higher capacity retention after 200 cycles at a rapid charge/discharge rate of 1 C within the 2.8-4.3 V range in a coin-cell test.

The research team applied the APA-LC system to produce a high-capacity 1.5 Ah (ampere-hour) pouch cell, which maintained excellent discharge capacity and performed strongly in fast-charging trials. This marks the first demonstration of a scalable, practical battery system built entirely from non-fluorinated materials. The team has not simply replaced fluorinated components; it has demonstrated high capacity retention and outstanding stability. The solution should advance the sustainability of the battery industry, facilitating the shift to non-fluorinated battery systems while ensuring environmental compliance.
Source: https://www.sciencedaily.com/releases/2024/09/240926131910.htm
New organic thermoelectric device that can harvest energy at room temperature
Researchers have developed a new organic thermoelectric device that can harvest energy from ambient temperature. While thermoelectric devices have several uses today, hurdles still exist to their full utilization. By combining the unique abilities of organic materials, the team succeeded in developing a framework for thermoelectric power generation at room temperature without any temperature gradient.

Thermoelectric devices, or thermoelectric generators, are energy-harvesting devices that can convert heat into electricity so long as there is a temperature gradient, with one side of the device hot and the other cool. Such devices have been a significant focus of research and development for their potential to harvest waste heat from other energy-generating methods. Perhaps their most well-known use is in space probes such as the Mars Curiosity rover or the Voyager probes, which are powered by radioisotope thermoelectric generators: the heat from decaying radioactive isotopes provides the temperature gradient that powers their instruments. However, due to issues including high production cost, use of hazardous materials, low energy efficiency, and the need for relatively high temperatures, thermoelectric devices remain underutilized today.

The researchers were investigating ways to make a thermoelectric device that could harvest energy from ambient temperature. “Our lab focuses on the utility and application of organic compounds, and many organic compounds have unique properties where they can easily transfer energy between each other,” explains Professor Chihaya Adachi of Kyushu University’s Center for Organic Photonics and Electronics Research (OPERA), who led the study. “A good example of the power of organic compounds can be found in OLEDs or organic solar cells.”

The key was to find compounds that work well as charge-transfer interfaces, meaning that they can easily transfer electrons between each other. After testing various materials, the team found two viable compounds: copper phthalocyanine (CuPc) and copper hexadecafluoro phthalocyanine (F16CuPc). “To improve the thermoelectric property of this new interface, we also incorporated fullerenes and BCP,” continues Adachi. “These are known to be good facilitators of electron transport. Adding these compounds together significantly enhanced the device’s power. In the end, we had an optimized device with a 180 nm layer of CuPc, 320 nm of F16CuPc, 20 nm of fullerene, and 20 nm of BCP.”

The optimized device had an open-circuit voltage of 384 mV, a short-circuit current density of 1.1 μA/cm2, and a maximum output of 94 nW/cm2. Moreover, all these results were achieved at room temperature without a temperature gradient.

“There have been considerable advances in the development of thermoelectric devices, and our new proposed organic device will certainly help move things forward,” concludes Adachi. “We would like to continue working on this new device and see if we can optimize it further with different materials. We can even likely achieve a higher current density if we increase the device’s area, which is unusual even for organic materials. It just goes to show that organic materials hold amazing potential.”
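The three reported figures can be sanity-checked against each other: the ratio of maximum output to the product of open-circuit voltage and short-circuit current density gives the device’s fill factor. The quick calculation below is our illustration, not a figure from the paper.

```python
# Back-of-the-envelope consistency check on the reported device figures.
v_oc = 384e-3   # open-circuit voltage, V
j_sc = 1.1e-6   # short-circuit current density, A/cm^2
p_max = 94e-9   # maximum output, W/cm^2

fill_factor = p_max / (v_oc * j_sc)
print(f"fill factor ≈ {fill_factor:.2f}")  # ≈ 0.22
```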
Source: https://www.sciencedaily.com/releases/2024/09/240919115027.htm
Shrinking AR displays into eyeglasses to expand their use
Augmented reality (AR) takes digital images and superimposes them onto real-world views. But AR is more than a new way to play video games; it could transform surgery and self-driving cars. To make the technology easier to integrate into common personal devices, researchers reported how to combine two optical technologies into a single, high-resolution AR display. In an eyeglasses prototype, the researchers enhanced image quality with a computer algorithm that removed distortions.

AR systems, like those in bulky goggles and automobile head-up displays, require portable optical components. But shrinking the typical four-lens AR system to the size of eyeglasses or smaller typically lowers the quality of the computer-generated image and reduces the field of view. Youguang Ma and colleagues may have found a solution for condensing the technology. They combined two optical technologies, a metasurface and a refractive lens, with a microLED screen (containing arrays of tiny green LEDs for projecting images) to create a compact, single-lens hybrid AR design.

The display’s metasurface is an ultrathin, lightweight silicon nitride film etched with a pattern that shapes and focuses light from the green microLEDs. A black-and-green image then forms on a refractive lens made from a synthetic polymer, which refines the image by sharpening it and reducing aberrations in the light. The final image is projected out of the system and superimposed onto an object or screen. To further enhance the resolution of the projected image, Ma and the team used computer algorithms to identify minor imperfections in the optical system and correct for them before light leaves the microLED.

The researchers integrated the hybrid AR display into a pair of eyeglasses and tested the prototype’s performance with computer image enhancement. Projected images from the one-lens hybrid system had less than 2% distortion across a 30° field of view, image quality on par with current commercial four-lens AR platforms. The researchers then confirmed that their computer preprocessing algorithm improved a reprojected AR picture of a red panda: the reprojected image was 74.3% structurally similar to the original, a 4% improvement over the uncorrected projection. With additional development, the researchers say the platform could extend from green to full color and enable a new generation of mainstream AR glasses.
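The “structurally similar” figure refers to the standard structural similarity index (SSIM). As a hedged illustration of how such a comparison is computed, here is a short scikit-image example on synthetic stand-in images, not the researchers’ red panda data.

```python
# Compute SSIM between a reference image and a degraded copy (synthetic data).
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(1)
original = rng.random((128, 128))  # stand-in reference image, values in [0, 1]
reprojected = np.clip(original + rng.normal(scale=0.05, size=original.shape), 0.0, 1.0)

score = ssim(original, reprojected, data_range=1.0)
print(f"SSIM: {score:.3f}")  # 1.0 means identical; the article reports 0.743
```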
Source: https://www.sciencedaily.com/releases/2024/09/240925122923.htm
Chandrayaan-3 detects mysterious tremors on Moon
India’s Chandrayaan-3 mission has detected over 250 seismic signals in the Moon’s south polar region. Among these, 50 distinct signals remain unexplained, suggesting the possibility of moonquakes. This marks the first time seismic data has been collected from the lunar south pole, and the first lunar seismic data since the Apollo missions.

The Instrument for Lunar Seismic Activity (ILSA), housed aboard the Vikram lander, conducted this experiment at coordinates 69.37° South and 32.32° East, operating continuously for 190 hours between August 24 and September 4, 2023. ILSA is not only the first instrument to record ground vibrations in the Moon’s south polar region; it is also the first to use sensors crafted through silicon micromachining technology on the lunar surface. The findings have been analysed and published by researchers from the Indian Space Research Organisation (ISRO) in the scientific journal Icarus.

According to the research, of the more than 250 seismic events recorded, approximately 200 can be linked to known activities, such as the movement of the Pragyan rover or the operation of other scientific instruments. However, around 50 signals remain unexplained, with no clear link to the rover’s movements or any other activity. “Further studies are needed to understand what may have caused these uncorrelated events,” KV Sriram, Director of LEOS, told TOI.

The most significant signals recorded by ILSA were associated with the navigation of the Pragyan rover. The longest continuous signal lasted 14 minutes, and about 60 signals have been connected to Pragyan’s remotely controlled movement. As the rover moved away from ILSA, researchers noted a systematic reduction in the amplitude of the recorded signals. For instance, when the rover was approximately 7 metres from the lander, the peak-to-peak amplitude was around 200 µg (millionths of Earth’s gravitational acceleration), and it decreased as the distance increased.

Chandrayaan-3’s mission has provided significant advancements in lunar science, particularly in understanding seismic activity in the Moon’s south polar region. The discovery of unexplained seismic events opens new avenues for research and exploration, and continued studies are crucial to uncover the origins of these signals and to further our understanding of the Moon’s geological activity.
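The reported amplitude fall-off with rover distance can be summarized with a simple power-law fit. In the sketch below, only the 7 m / 200 µg point comes from the article; the other samples and the power-law form itself are assumptions for illustration.

```python
# Illustrative power-law fit of signal amplitude versus source distance.
# Only the first data point (7 m, 200 µg) is from the article; the rest are
# hypothetical values chosen to demonstrate the fitting procedure.
import numpy as np

distance_m = np.array([7.0, 10.0, 15.0, 20.0])
amplitude_ug = np.array([200.0, 120.0, 70.0, 45.0])

# Fit amplitude = a * distance**(-n) in log-log space.
slope, intercept = np.polyfit(np.log(distance_m), np.log(amplitude_ug), 1)
print(f"decay exponent ≈ {-slope:.2f}")  # geometric spreading plus attenuation
```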
Squid-inspired fabric for temperature-controlled clothing
Too warm with a jacket on but too cold without it? Athletic apparel brands boast temperature-controlling fabrics that adapt to every climate with lightweight but warm products. Yet consider a fabric that you can adjust to fit your specific temperature needs. Inspired by the dynamic color-changing properties of squid skin, researchers from the University of California, Irvine developed a method to manufacture a heat-adjusting material that is breathable and washable and can be integrated into flexible fabric.

“Squid skin is complex, consisting of multiple layers that work together to manipulate light and change the animal’s overall coloration and patterning,” said author Alon Gorodetsky. “Some of the layers contain organs called chromatophores, which transition between expanded and contracted states (upon muscle action) to change how the skin transmits and reflects visible light.”

Instead of manipulating visible light, the team engineered a composite material that operates in the infrared spectrum. As people heat up, they emit some of their heat as invisible infrared radiation (this is how thermal cameras work). Clothing that manipulates and adapts to this emission, fitted with thermoregulatory features, can finely adjust to the wearer’s desired temperature. The material consists of a polymer covered with copper islands; stretching it separates the islands and changes how the material transmits and reflects infrared light. This creates the possibility of controlling the temperature of a garment.

In a prior publication in APL Bioengineering, the team modeled the composite material’s adaptive infrared properties. Here, they built upon the material to increase its functionality, making it washable, breathable, and integrable into fabric. The team layered a thin film onto the composite to enable easy washing without degradation, a practical consideration for any fabric. To make the composite breathable, the team perforated it with an array of holes; the resulting product exhibited air and water-vapor permeability similar to cotton fabrics. The team then adhered the material to a mesh to demonstrate straightforward fabric integration.

Using Fourier transform infrared spectroscopy, the team tested the material’s adaptive infrared properties, and they used a sweating guarded hot plate to test its dynamic thermoregulatory properties. Even with simultaneous thin-film layering, perforation, and fabric integration, the material’s heat-managing performance did not suffer.

In addition to the possible applications for the fabric, the manufacturing process the team used is also full of potential. “The strategies used for endowing our materials with breathability, washability, and fabric compatibility could be translated to several other types of wearable systems, such as washable organic electronics, stretchable e-textiles, and energy-harvesting triboelectric materials,” said Gorodetsky.
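As a toy illustration of the strain mechanism (the functional form and all numbers here are assumptions, not measurements from the study): stretching pulls the copper islands apart, exposing more of the IR-transmitting polymer beneath, so transmittance rises roughly with the uncovered area fraction.

```python
# Hypothetical model: IR transmittance grows as strain reduces the fraction
# of the surface covered by reflective copper islands. Numbers are invented.
import numpy as np

strain = np.linspace(0.0, 0.5, 6)            # 0-50% stretch
covered_fraction = 0.8 / (1.0 + strain)      # assumed island coverage vs strain
ir_transmittance = 1.0 - covered_fraction    # uncovered area transmits IR

for s, t in zip(strain, ir_transmittance):
    print(f"strain {s:.1f}: IR transmittance ≈ {t:.2f}")
```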
Source: https://www.sciencedaily.com/releases/2024/10/241001114730.htm
Discovery could lead to longer-lasting EV batteries, hasten energy transition
Batteries lose capacity over time, which is why older cellphones run out of power more quickly. This common phenomenon, however, is not completely understood. Now an international team of researchers, led by an engineer at the University of Colorado Boulder, has revealed the underlying mechanism behind such battery degradation. The discovery could help scientists develop better batteries, allowing electric vehicles to run farther and last longer while also advancing the energy storage technologies needed to accelerate the transition to clean energy.

Engineers have been working for years to design lithium-ion batteries, the most common type of rechargeable battery, without cobalt. Cobalt is an expensive rare mineral, and its mining has been linked to grave environmental and human rights concerns. So far, scientists have tried other elements such as nickel and magnesium to replace cobalt in lithium-ion batteries, but these batteries have even higher rates of self-discharge, in which the battery’s internal chemical reactions reduce stored energy and degrade its capacity over time. Because of self-discharge, most EV batteries have a lifespan of seven to 10 years before they need to be replaced.

Researcher Toney, who is also a fellow of the Renewable and Sustainable Energy Institute, and his team set out to investigate the cause of self-discharge. In a typical lithium-ion battery, lithium ions, which carry charge, move from one side of the battery, called the anode, to the other side, called the cathode, through a medium called an electrolyte. During this process, the flow of charged ions forms an electric current that powers electronic devices. Charging the battery reverses the flow and returns the ions to the anode.

Previously, scientists thought batteries self-discharge because not all lithium ions return to the anode during charging, reducing the number of charged ions available to form the current and provide power. Using the Advanced Photon Source, a powerful X-ray machine at the U.S. Department of Energy’s Argonne National Laboratory in Illinois, the research team discovered that hydrogen molecules from the battery’s electrolyte move to the cathode and take the spots that lithium ions normally bind to. As a result, lithium ions have fewer places to bind on the cathode, weakening the electric current and decreasing the battery’s capacity.

With a better understanding of the self-discharge mechanism, engineers can explore ways to prevent it, such as coating the cathode with a special material to block hydrogen molecules or using a different electrolyte. “Now that we understand what is causing batteries to degrade, we can inform the battery chemistry community on what needs to be improved when designing batteries,” Toney said.
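The mechanism lends itself to a toy capacity model, shown below with illustrative numbers only (not data from the study): if hydrogen permanently occupies a small fraction of the cathode’s lithium binding sites each cycle, usable capacity decays geometrically.

```python
# Toy site-occupation model of capacity fade. The per-cycle loss fraction
# is hypothetical; real degradation depends on chemistry and conditions.
initial_capacity_mAh = 3000.0
site_loss_per_cycle = 0.0005  # assumed fraction of binding sites lost per cycle

for cycles in (100, 500, 1000):
    capacity = initial_capacity_mAh * (1.0 - site_loss_per_cycle) ** cycles
    print(f"after {cycles:4d} cycles: {capacity:6.0f} mAh "
          f"({100.0 * capacity / initial_capacity_mAh:.1f}% retained)")
```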
Source: https://www.sciencedaily.com/releases/2024/09/240912142413.htm
Stronger together: miniature robots in convoy for endoscopic surgery
Miniature robots on the millimeter scale often lack the strength to transport instruments for endoscopic microsurgery through the body. Scientists at the German Cancer Research Center (DKFZ) are now combining several millimeter-sized TrainBots into one unit and equipping them with improved “feet.” For the first time, the DKFZ team was able to perform an electric surgical procedure on a bile duct obstruction experimentally with a robotic convoy.

The list of conceivable applications for miniature robots in medicine is long: from targeted drug application to sensing tasks and surgical procedures. An arsenal of robots, from the nanometer to the centimeter scale, has already been developed and tested for this range of tasks. However, today’s little helpers reach their limits in many tasks, such as endoscopic microsurgery: the required instruments are often too heavy for a single millimeter-sized robot to carry to its destination. Another common problem is that the robots often have to move by crawling, yet the surfaces of many body structures are covered with mucus on which the robots slip and cannot move.

A team led by Tian Qiu at the DKFZ in Dresden has now developed a solution to both problems: their TrainBot connects several individual millimeter-scale robots, each equipped with improved anti-slip feet, and together they are able to transport an endoscopic instrument. The TrainBot unit works wirelessly; a rotating magnetic field controls the individual units simultaneously, enabling movement in a plane along with controlled rotation. The external actuation and control system is designed for working distances at the scale of the human body.

The Dresden-based DKFZ researchers have already used a convoy of three TrainBot units to simulate a surgical procedure. In bile duct cancer, the bile duct often becomes blocked, causing bile to back up, which is a very dangerous situation for those affected. The occlusion must then be opened after an endoscopic diagnosis. To do this, a flexible endoscope is inserted through the mouth into the small intestine and from there into the bile duct. One of the major difficulties is navigating the endoscope around the sharp angle from the small intestine into the bile duct. “This is where the flexible robot convoy can show its strengths,” says project leader Tian Qiu.

His team demonstrated the procedure using organs removed from a pig. The robot convoy was able to maneuver an endoscopic instrument for electrical tissue ablation into the bile duct. Once the tip of the wire electrode arrives at the site, electrical voltage is applied and the tissue blockage is gradually removed electrically, a procedure known as electrocauterization. The wire electrode used was 25 cm long and three and a half times as heavy as a single TrainBot unit. “Afterwards, for example, another TrainBot convoy can bring a catheter for fluidic drainage or drug delivery,” says researcher Moonkwang Jeong. “After the promising results with the TrainBots in the organ model, we are optimistic that we will be able to develop teams of miniature robots for further tasks in endoscopic surgery.”
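A minimal sketch of the rotating-field actuation idea follows; the field magnitude, rotation frequency, and sample times are hypothetical, since the article gives no system parameters. The point is simply that one field vector of fixed magnitude sweeps through the actuation plane, and every TrainBot unit in the convoy follows it at the same time.

```python
# Sketch of a rotating magnetic field vector in the actuation plane.
# B0 and freq are assumed values, not the DKFZ system's parameters.
import numpy as np

B0 = 10e-3                     # field magnitude, tesla (hypothetical)
freq = 2.0                     # rotation frequency, Hz (hypothetical)
t = np.linspace(0.0, 1.0, 5)   # sample times, s

for ti in t:
    bx = B0 * np.cos(2 * np.pi * freq * ti)  # in-plane x component
    by = B0 * np.sin(2 * np.pi * freq * ti)  # in-plane y component
    print(f"t={ti:.2f}s  B=({bx * 1e3:+.1f}, {by * 1e3:+.1f}) mT")
```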
Source: https://www.sciencedaily.com/releases/2024/10/241001114926.htm