Barry Haycock

I've owned both barryhaycock.com and bazhaycock.com for a while now, but never really used bazhaycock.com. The plan was always to use this one for non-work stuff. So, over at barryhaycock.com, you can find my Ph.D. thesis and details about my day job. This page is a testing ground- somewhere I can play with ideas, post a few bits and pieces I'd like to talk about, and generally play with different site tools. I built the current site using Webflow.com after it was written up over at Y-Combinator; please feel free to email me with comments and thoughts.

The big thing about Webflow.com is that I can simultaneously develop pages that look great on a computer, tablet or smart-phone. Please email and let me know what you think- I plan to explore this further.

Projects (robotics)
Code snippets (quick scripts and snippets)
About

Projects

I get carried away with small electronics projects once in a while, and I decided I should really put up images and discussion. The main project here is the "Penguin Project", which I discuss a fair amount. There's also an overview of the voice-responsive eyes I built and a modification of the useless machine.

Unfortunately, I'll have to add videos here via upload to YouTube, so I'll work on that as time allows.

Ez-Robot controller, short review

Cheerful box. Well thought-out project. Overall, a great kit. The kit is available from http://www.ez-robot.com/, and the range has recently been updated with new kits- namely the "Revolution" line of out-of-the-box, pre-built bots.

[Photo: 2012-05-14]

Ez-Robot is the brainchild of one DJ Sures, who calls himself a "TechRockStar", a title that's well deserved. The idea behind his Ez-Robot kit is that it's a quick and easy starter kit with, at its heart, a microcontroller pre-mounted on a board with integrated Bluetooth. The rest of the kit includes a wireless webcam, 4 standard servos, 2 continuous-rotation servos, an ultrasonic rangefinder, wheels, and instructions to download the controller software, which runs on Windows (I'm a Mac guy) and lets you build programs for the controller via a GUI or using .NET.
GUI please, I haven't a clue about programming in .NET

Just looking at the box, you can see the effort that has gone into this- having recently read Jobs' biography, I cannot help but notice that there is some Apple-esque design philosophy applied here, and I really love that.

What's inside this kit is exactly the opposite of the SparkFun Professional Arduino Kit I purchased a long time ago- and that's not a knock on SparkFun or Arduino. The SparkFun kit comes with everything you need to carry out the basic projects included in its handy instructions. You are taken through a bunch of little projects that make lights flash or respond to temperature. Everything is on a breadboard, and self-contained.

By comparison, the Ez-Robot kit doesn't come with "nerdy" things like a breadboard, or LEDs for that matter. Instead, you get polished, pre-made components chosen to work with each other and to respond to the computer via Bluetooth, or to input from the included camera (which comes with a USB dongle). This is a pack designed for someone with BIG projects in mind, who's going to add their own components and dream big. This is an interface that appears to require nothing more than the ability to use Windows to build programs. This is a completely different paradigm from Arduino. This is going to be COOL.

[Photo: 2012-05-14]

The big deal for me is that I've wanted to get to the point of setting up a computer / Arduino combination that will do image and voice recognition. With the LabVIEW RIO this is pretty simple- the MARS high-school robotics team I mentor (all under 17) does it regularly. But a RIO is seriously expensive, while the Ez-Robot kit came in at under $200. The next big deal is that camera- I cannot wait to start playing about with what I can do with it.

Links: Instructables · DJ Sures · EZ-Robot.com

Easter Egg Eyes

Following a wonderful Instructables page, I decided to give their robotic eyes a shot one Sunday afternoon.

[Photo: 2012-05-03]

Unfortunately, said Sunday afternoon was quite a bit after Easter, so getting my hands on some plastic eggs proved to be a hassle. Rather than ending up with the pristine white eyes I would have liked, I ended up with red and green. However, I did get them to move and blink as I wanted, and they now sit proudly on a shelf in my home.

The big reason for putting these eyes together wasn't so much to follow the Instructables page- that's available via the button below, and it does a far better job of explaining the how-and-what than I can here. What I really wanted to do was play with the Ez-Robot kit I had just bought. So, here's the project starting with the kit and ending with the final result, which is both brilliant and will probably bring me nightmares for years to come.

Of course, putting together the Ez-Robot controller and the Easter egg eyes has proved fun. The video is on the right here, where we can see that the voice response on the controller is pretty awesome. I'm really looking forward to using the controller for something more complex now.

Instructables

Penguin-related projects

In an effort to come up with ideas for what to actually do with the Ez-Robot kit, aside from voice-controlled eyes, I picked up this adorable penguin toy and hatched a plan. It's a toy of a character from Ice Age 2 (I believe). I would post the videos of its heart-warming little songs, its awkward and adorable dance, and the way it pretends to know what you're saying and says interesting things back- every one of those is kind of fun. I have another video of this guy dancing and giggling in the path of my Roomba, where the Roomba and the penguin almost appear to see a kindred spirit in one another. It'd warm the coldest of hearts, so it would.

However, since I'm about to discuss pulling the toy's head apart- the first step in combining it with servos, controllers, elastic bands, and the like, and effectively the beginning of its inevitable reincarnation as Frankenstein's penguin- I feel it prudent not to post said videos. I might end up putting the Roomba-vs-penguin video up later; it's kinda funny.

So, here he is, fluffy and happy-eyed:

[Image: Lone Penguin]

And, I'm going to tell the story, for better or worse, about how I made his little head look like this:

[Photo: the modified penguin head]

... which, I'm the first to admit, is a little creepy, but it's part of a process! I'm working towards something awesome. I eventually want to get this to a point where I can make a walking robot that responds to voice commands.

So, the first thing was to set about pulling the toy open. I have to admit, opening this guy was a learning experience and a half. If you've ever seen this toy (and again, I'm afraid of the backlash if I post videos), it dances to its own music and sort of blinks as it talks. But once you get that soft, fluffy exterior off, the innards are very industrial-chic. What I found was an endoskeleton of hard plastic, springs where the arms should be, a control board and a single motor. That single motor was a major surprise- the legs, arms, eyes, beak and head rotation are all driven from the one spinning motor and one far larger gear-box.
Update: I've since repurposed that gear-box and motor assembly to build a "Useless Machine" (web-search it) with a button on the front rather than on the lid; details to follow soon.

Actually, the simplicity of the inside of this toy deserves further comment. I really was impressed. The original toy had a "spine" with a disk at the end; two "nubbins" on that disk pushed on levers that made the eyes and beak open and close simultaneously, at seemingly random moments.

[Photos: 2012-05-21]

Once the soft felt was removed from the torso, there was the issue of the size of the toy's feet. What was required here was to avoid cutting or damaging the outer covering, because I might need it later. The feet are particularly huge (for balance as much as for "cuteness", I reckon). You'll see the full innards in a second, including the single oscillating bar that runs through the toy- it's why the toy needs a large base, so it doesn't fall over when it's dancing.

[Photos: 2012-05-21]


But the inside of this device is a work of art- well, a work of "economy art", maybe! As I've already said, it's just wonderfully simple. Here are the front and back. You can see how stunted the legs are, and that the only large parts are the housing, which contains a motor and very little else, and a big speaker on the front. There's nothing else in here.

[Photos: 2012-05-21 and 2012-05-29]

Under the panel on the back is a simple, small, effective circuit board. That little black circle there is basically the controller- the "brains" of the entire device. This circuit board is the only thing I will probably not reuse in this or some future project.
The image on the left here shows the arms- these amazed me too. The toy would move its arms (wings?) up and down as it danced, and they're just mass-produced springs under some cloth! I don't know about you, but the idea that something so simple looks so "cartoon-realistic" in action is cool.

[Photos: 2012-05-29]

In this pair of pictures, the internal plastic body has been opened. We can see the way everything is tied together: everything, absolutely everything, within the device depends on the motor on the right-hand side of the body. The second image shows that motor, which is attached to a gear-box with a single rotary output; that output flicks the lever attached to the arm-springs, shakes the leg-struts, and spins a bar that runs up into the penguin's head.

Truthfully, one of the hardest parts was opening the head of the penguin toy without actually breaking it. I wanted to get in there without damaging anything, so that the eyes and beak could be made to operate independently of one another, under computer control. Pro-tip: the screws holding the skull together were buried under the felt. It took ages to find them, but I'm very glad I didn't resort to using a saw to cut the entire enclosure open.

Patience is a virtue, I suppose.

I tried to photograph the hidden screw-holes once I found them. Unfortunately, they were indiscernible from the background in any photo I took, so the best I can do is use the silver screwdriver as a kind of pointer in the image on the right.

Without further ado, here's a montage of opening up the head:

[Photo montage: opening up the head, 2012-06-06 to 2012-06-11]

That last picture reminds me of Vanilla Sky. Again, I'm amazed by how simple but effective the mechanisms for animating the device are. In the first picture, you can see that "spine" I've mentioned before entering the head. At its end is a metal bar that bumps a lever, moving the eyes and the bottom half of the beak simultaneously.

If you look through the back-of-the-face photos, you can see how the eyes are mounted. I was surprised to learn that the beak and the eyes were coupled via a lever mechanism- for my purposes, this was going to have to change.

Now that I've got everything out and stripped down, with (thankfully) nothing damaged, it's time to build!

[Photo: 2012-07-01]

So, I want a head that I can fully control- eventually I want to be able to move the mouth and the eyes in sync with some playing audio and have a penguin that appears to talk. What I failed to realise at this point was that I had absolutely no idea what I was doing. I had an idealized image of what I wanted in my head, I had this dismembered toy all over the floor, and I hadn't a clue how to turn the latter into the former.

My first real problem here was the independence of the eyes and beak. What I thought I needed was to get servos (which are big) to fit in the cavity of the head (which is small) and somehow attach the servo arms to the levers and parts already in the head. I played with a bunch of configurations, dutifully prototyped and wall-tacked parts into place, and checked to see if they would work. I spent quite a few nights on this, and no, I won't tell you how many.

Eventually, I had a brainwave- and I'll admit, it should have occurred to me a lot sooner. My idea, my fiendishly perfect plan, my great solution was so obvious that I should have done it first: call my friend Viking, who's a professional puppeteer and would actually have some notion of how to animate a head. Why I didn't think of this before is beyond me.

You've got to love professionals, in any game- people who, in one small niche of the human experience, have made what's complex to the rest of us mundane and obvious. Viking's suggestion, which I hadn't thought of, was elegantly simple. "Barry," he said, "what I would do with that is put an elastic band in there, holding the eyes closed, then run a bit of wire down from that, and you can put the servos anywhere you like. Do the same with the beak."

This picture shows the elastic holding the eyes closed. It also shows where the wire will attach to open them. Anytime I need to remember that no matter what I do in life, there will always be new things to learn, I look at this picture.

For the record, Viking's real name is Matthew Laird. Not only is he a master puppeteer with an impressive portfolio and a cutting wit, he's a good friend. He considers his current occupation "corporate" and, as such, no longer keeps an online resume. So, after pleading with him to put some of his work up so I could link to it as a credit, he asked that for the minute I link to his Facebook, where he keeps a limited collection of his work.

He has cheerfully agreed that if any budding roboticist has questions about animation, he'd be more than happy to reply to messages on his Facebook.

Matthew "Viking" Laird's Facebook page. 525966afe49a2aba140002af_2012-07-01%2020.38.27.jpg 52596631e11bc1bb14000309_2012-07-01%2020.38.19.jpg

With Viking's advice, I was able to work out how to make the head a "stand-alone" device, which lets me modularize the entire project into a series of small projects- the head is separate from an eventual body, and so on.
I cut a piece of poster-board (my absolute go-to for prototyping anything) into a neat circle that could be hot-glued to the base of the head and house the servos I needed. This way, I can mount the head on a gimbal if necessary, or just pop it down on a desk, as in the video above.

The clearest way to see everything- I've attached the servos to the original control levers inside the head using picture wire- is another photo montage. What you can see below is the base-board with the servos attached. The servos pull on wires that work against an elastic band (which closes the eyes) or a spring (which closes the beak). Then the entire ensemble is closed up, leaving only the trailing cables for my two servos, which I can interface with via the control board.

[Photo montage: mounting the servos and closing up the head, 2012-07-01]

I feel that the images above explain the process as clearly as I can. However, if anyone has any questions, email me anytime with the button on the left.

I've tried to summarize the operation in this video. This marks the end of my "Penguin Head" project; the video of its motion is above, where I mounted it on a servo for effect. All in all, this is a little rough around the edges, but I'm proud of what I've built so far. For the moment, I'll be tweaking the controller code and will post videos of that in the future.
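For anyone wiring up something similar, here's a minimal sketch of the kind of control loop involved- written for an Arduino with the hobby Servo library, since my Ez-B side of things is all done through its GUI. The pin numbers and angles are made up for illustration; the real "open" and "closed" positions depend entirely on how your wires and elastic are rigged.

#include <Servo.h>  // standard Arduino hobby-servo library

Servo eyeServo;   // pulls the wire that opens the eyes (the elastic closes them)
Servo beakServo;  // pulls the wire that opens the beak (a spring closes it)

const int EYES_CLOSED = 20;   // angle with the wire slack (illustrative)
const int EYES_OPEN   = 90;   // angle with the wire pulled taut (illustrative)
const int BEAK_CLOSED = 20;
const int BEAK_OPEN   = 100;

void setup() {
  eyeServo.attach(9);    // signal pins 9 and 10, chosen arbitrarily
  beakServo.attach(10);
}

void loop() {
  // Blink: pull the eyes open, hold, then let the elastic snap them shut.
  eyeServo.write(EYES_OPEN);
  delay(1500);
  eyeServo.write(EYES_CLOSED);
  delay(300);

  // "Talk": flap the beak against its spring a few times.
  for (int i = 0; i < 3; i++) {
    beakServo.write(BEAK_OPEN);
    delay(150);
    beakServo.write(BEAK_CLOSED);
    delay(150);
  }
}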

How do legs actually work?

The title of this post isn't meant to be deep, or ironic. It's entirely truthful. Once I got the head up and running for my penguin (see the post above), I needed to tackle another couple of fundamentals of an animatronic device- namely, the arms and the legs. Arms seem like a simple enough idea- they go up and down... I can think about those later. But legs are a different matter.

If you think about it, the simplest idea would be to hide wheels under a chassis and call it a day. But I wanted something a little more subtle. Let me be quite clear here: I do not want, nor do I have the expertise, to emulate Asimo or any of the walking robots that have come out since, like the Nao or the new Ez-Robot kit. What follows is the easy-to-copy method I've discovered and developed so far for a simple set of robot legs. Also, a confession: I cheated a little. I bought one of these toys:

[Image: Disc Shooting Mr. Robot, Amazon.com]

There were a couple of reasons for this purchase. Mainly, I could see from the videos that this robot "walked" in a way I could take advantage of; secondly, the motion of its head in the online videos suggested a mechanism I could copy further down the line, once I build a torso for the Penguin-bot.

Compared to the penguin toy above, opening up Mr. Robot was trivial. There were no hidden clips, and I didn't have to remove a furry skin. This toy was cheap-and-cheerful by comparison:

[Photo montage: opening Mr. Robot, 2012-09-02]

One thing to be said for this kind of mass-produced, non-brand-name toy is that the construction is entirely modular- as a result, there's not a huge amount to say. The legs are attached to the torso by a couple of screws, and wires run to motors in each foot, so removal was simple. As an aside, these legs are far too big to fit inside the original penguin toy's covering, but I'll deal with that later:

[Photo: 2012-09-02]

That aside, my first and foremost question was how the legs worked: how the motion was made to look a bit life-like while the feet are still simply wheels, so the toy doesn't have to deal with issues like balance. The video below explains this operation:


So, that's my first question answered!

What follows here is how I attacked the issue of controlling these legs from a board like the Ez-B or the Arduino. Looking at the images above, it would seem trivial to simply attach the cables from the motors to an analog output on a control board and drive the motors- or so you would think!

The issue with that approach is one of power. You typically run an Arduino from a 5V-ish supply, and a control pin can only source a few tens of milliamps- perfect for most applications, but nowhere near enough to move the weight I expect these legs to carry. A servo is a motor with a built-in gearbox and driver circuitry, drawing its power from a supply line rather than from the controller's signal pin, and it's already set up to accept signals from a controller. So I needed to rig a couple of those into the legs.

In my Ez-B kit, which I discuss above, there are two continuous-rotation servos. If you don't know what these are, it's simple. A "standard" servo only has a limited range of motion, typically about 180 degrees. The signal your controller sends it is a train of pulses repeated many times a second- basically, a square wave. The width of each pulse- how long it stays high, typically between about 1 and 2 milliseconds- is interpreted by the servo as where to point. These pulses-as-a-signal are called "Pulse Width Modulation" (PWM) and are a standard in instrumentation applications. Long story short, this is great for subtle control, but useless for an application like wheels. A continuous-rotation servo (sometimes called a "modified servo") takes the same pulse signal but interprets it as rotational velocity rather than as how far to rotate. So this is what I need for my purposes.
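To make the difference concrete, here's a minimal Arduino-flavoured sketch (pin numbers made up; the Ez-B handles the same thing through its GUI). The very same write() call means "go to this angle" on a standard servo and "turn at this speed and direction" on a continuous-rotation one:

#include <Servo.h>

Servo standardServo;    // limited range: write() sets an angle
Servo continuousServo;  // modified: write() sets speed and direction

void setup() {
  standardServo.attach(5);    // illustrative pin choices
  continuousServo.attach(6);
}

void loop() {
  standardServo.write(0);     // point to 0 degrees and hold
  continuousServo.write(180); // spin at full speed in one direction
  delay(2000);

  standardServo.write(180);   // sweep to the other end of its range
  continuousServo.write(90);  // 90 means "stop" on a continuous servo
  delay(2000);

  continuousServo.write(0);   // full speed in the opposite direction
  delay(2000);
}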

My next step was to crack out the rotary tool and glue gun, and to experiment liberally. I wanted to keep the general abilities of these legs- namely, how they move in a "life-like" way- but I needed more control and more power than the little 1.5V motors inside could muster. Here come the photos:

[Photo montage: replacing the leg motors with servos, 2012-09-02]

So, I think this build is pretty self-explanatory- the main objective was to explain how the motion of the legs works. Essentially, I replaced the motors and gears within the legs with servos, to allow for more power and better control over the motion. The servos are also easier to interface with via my control board, with less chance of burning out something on the board if they draw too much power (for example, if a motor is fighting a large load).

[Photo]

At this point, this is what I have. A head and a pair of legs.

My next step is to build a torso, which will be part of a future post. In the meantime, some of the parts I had left over became part of a "useless machine", which I'll post about in the near future.

About me

This is a candidate essay for barryhaycock.com, but I'm pre-posting it here because it sums up a lot about my work.

My day job mainly focuses on applying high-throughput computational techniques to the study of materials at the nano-scale. This work grew quite naturally out of what I did for my Ph.D. and has expanded considerably since then.

Given that current silicon-chip-based technology has hit a wall in terms of how fast clock speeds can go, silicon carbide is a viable next-generation material for computer chips- it can switch faster than silicon. As part of my Ph.D. research, I began studying a very interesting phenomenon on the surface of silicon carbide- a feature called a "Mott-Hubbard transition". Without going into too much detail: although, in theory, this surface should conduct electricity, in practice it does not. Mott and, later, Hubbard explained that this happens when electrons repel one another strongly enough to stay locked in place, blocking the flow of current (electrons keep a distance from one another, much like two alike poles of a magnet). Mott-Hubbard transitions have been seen in bulk materials, but this is a rare case of such an effect on the surface of a material, which seemed curious to me.

At the time, I was already helping to develop a FORTRAN package that had been highly successful in carrying out really efficient studies of nanoscale systems. I had implemented new integration algorithms using recursion, a feature of newer versions of FORTRAN that was not available when the package was originally written. This made the code even more efficient and allowed for very long simulations of changes in materials, or many thousands of simulations, within an acceptable amount of computer time. I had also implemented a newer way to simulate electrons, known as the "exact exchange" method.

When I simulated the silicon carbide surface, I noticed one thing that hadn't been discussed in the literature at all: the silicon atoms on the surface were bouncing up and down in a coordinated pattern. For every surface atom that went up, its local neighbors went down, and vice-versa- very much like an array of coupled see-saws. The really cool thing is that this was happening faster than any current experimental technique could possibly measure. By applying some basic statistical analysis to a huge number of simulations of this system, I was able to show that there are two basic states with almost the same ground energy and a small barrier between them- indeed, a see-saw motion. I was also able to show that the energies of the electrons, when the atoms are displaced up-or-down like this, look exactly like what is observed in a Mott-Hubbard transition. Essentially, the electron energies are due to a motion nobody had seen before- not to a highly exotic phenomenon that usually does not occur on the surface of materials.

A really easy way to explain this is included in my Ph.D. defense, but here's the gist: if you've ever seen a video where a helicopter is clearly flying yet its propellers appear still, you know the motion is there even though you cannot see it. That happens because the camera takes roughly 24 still images a second, which play back as motion; since the propellers are in the same spot each time an image is taken, they appear to stand still. Precisely the same thing is happening on the silicon carbide surface- there is motion we cannot see, and it leads to the electrons "flying" around just like the helicopter.
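For anyone who wants the gist of that statistical analysis in symbols, here is a minimal sketch of the kind of relation it rests on (illustrative notation, not the exact expressions from the thesis). With two states of energies E1 and E2, the relative populations across a large ensemble of simulations follow Boltzmann statistics:

\[ \frac{p_2}{p_1} = \exp\!\left(-\frac{E_2 - E_1}{k_B T}\right) \]

So E1 ≈ E2 gives two nearly equally populated states, while a small barrier E_b between them sets how fast the see-saw flips, through an Arrhenius-type factor \( \exp(-E_b / k_B T) \)- for a small barrier, far faster than experimental probes can resolve.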

In a similar vein, much of my current research at the post-doctoral level applies high-throughput calculations to the study of nanoscale materials. I pay particular attention to developing tools and techniques that the undergrads assigned to me can use to study even more materials. In this regime, I have assisted one undergraduate in generating an ensemble of calculations to work out the statistical chance of an electron jumping in benzene- and his results match experiment almost exactly. I have also supported a graduate student at Rochester University with a similar study on a more complex system; both of these research projects were initially started by me and then handed over to the young researchers.

A central focus of my current research is the study of "delafossite" materials. Delafossites are a class of materials whose formula unit contains two metal atoms and two oxygen atoms. They are of particular interest because it is possible to tailor the electronic properties of the material by alloying in impurities. In this way, it has been shown that CuGaO2, a popular delafossite, can be made to absorb visible light. CuGaO2 can also be used in the photocatalysis of CO2. What all of this means is that, with the addition of iron, CuGaO2 can take CO2 from manufacturing chimney stacks and convert it into methanol, which can be used as car fuel. And it can do this powered solely by sunlight!
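For reference, the overall reaction usually quoted for the photocatalytic reduction of CO2 to methanol is the six-electron process (this is the generic textbook reaction, not a claim about the specific mechanism on iron-doped CuGaO2):

\[ \mathrm{CO_2} + 6\,\mathrm{H^+} + 6\,\mathrm{e^-} \longrightarrow \mathrm{CH_3OH} + \mathrm{H_2O} \]

with the protons and electrons typically supplied by the light-driven oxidation of water at the catalyst.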

The difficulty in computationally studying this alloying of iron into CuGaO2 is that it involves low levels of iron occupying the same sites as the Ga atoms. This means we need to simulate very large numbers of atoms in order to achieve a low iron-to-gallium ratio. Compounding the problem, it is very difficult to know where the iron atoms sit relative to one another when these materials are synthesized under laboratory conditions. For these reasons, we use a computational code that is extremely efficient, so that we can simulate large numbers of atoms; we use high-throughput techniques to simulate thousands of possible atomic configurations; and we apply statistical analysis to the results to determine the most likely configurations of the iron atoms before probing the more complex properties of the simulated material. We carry out such studies for iron-to-gallium ratios from 0.01 through 1.00, with the iron occupying randomly selected gallium sites within the crystal structure of CuGaO2.
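To make the workflow concrete, here is a toy C++ sketch of just the configuration-generation step. It is purely illustrative: the site count, ratio, and output are made up, and the real study feeds each configuration into an electronic-structure code rather than printing it.

// Toy illustration of high-throughput configuration sampling:
// randomly substitute Fe onto Ga sites at a target ratio, many times over.
#include <algorithm>  // std::shuffle, std::fill
#include <iostream>
#include <random>
#include <vector>

int main() {
    const int nGaSites = 200;     // Ga sites in the supercell (illustrative)
    const double feRatio = 0.05;  // target Fe:Ga ratio (illustrative)
    const int nConfigs = 1000;    // how many random configurations to sample

    std::mt19937 rng(42);         // fixed seed so runs are reproducible
    const int nFe = static_cast<int>(feRatio * nGaSites);

    for (int c = 0; c < nConfigs; ++c) {
        // 1 marks a Ga site replaced by Fe, 0 marks an untouched Ga site.
        std::vector<int> isFe(nGaSites, 0);
        std::fill(isFe.begin(), isFe.begin() + nFe, 1);

        // Scatter the Fe atoms over randomly chosen Ga sites.
        std::shuffle(isFe.begin(), isFe.end(), rng);

        // In the real workflow this configuration would be written out as a
        // simulation input; its total energy feeds the statistical analysis.
        std::cout << "config " << c << ": Fe on sites";
        for (int i = 0; i < nGaSites; ++i)
            if (isFe[i]) std::cout << ' ' << i;
        std::cout << '\n';
    }
}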

A large part of my research currently goes into developing a complete computational system to calculate the interesting properties of delafossite materials. In-house, this project is called "DelafossSETI", after the SETI project. Delafossites are a type of metal oxide that appear to defy theory in terms of their behavior- but even more compelling is the fact that one of my collaborators at NETL has been able to show that a certain delafossite may be able to convert carbon dioxide, a greenhouse gas, into methanol (which can be used as car fuel), powered only by sunlight. To achieve this, a very distinct "recipe" must be followed, using a mixture of different kinds of delafossites. The result is groundbreaking and more than a little surprising. It also poses a unique problem for computational techniques: most computer-simulation methods handle only a few tens of atoms, but if you are mixing two materials together, you need many more atoms in your simulation. In this case, we are studying ratios from 1:20 down to 1:100 in a material that contains hundreds of atoms.

Compounding the problem is the fact that there is no "ideal" way this mixture combines. Experimentally, these materials are just mixed together and nature works out where everything should be. That is impossible computationally; we have to set up initial conditions for how the materials are combined and calculate the energies to find the most likely configurations. What we do is generate thousands of simulations with random positioning of the constituents, and use this high-throughput data to see trends in the positions of atoms and elements relative to one another- thereby working out what nature already knows: the best way to fit these constituent parts together. Once we have that information, we can begin to probe the new material for how it does what it does, and why.