
Episode 007: Despite All Her Rage, Ava is Still Just a Rat in a Cage! Ex Machina (2015)

Join Alex in a solo episode as he discusses the psychological concepts in Alex Garland’s sci-fi headspace thriller Ex Machina (2015), starring Domhnall Gleeson, Oscar Isaac, and Alicia Vikander in literally the only three main speaking roles. If this is our future, we clearly need to start welcoming our robot overlords.

Please leave your feedback on this post, the main site (cinemapsychpod.swanpsych.com), on Facebook (@CinPsyPod), or Twitter (@CinPsyPod). We’d love to hear from you!

Don’t forget to check out our Patreon and/or Paypal links to contribute to this podcast and keep the lights on!

Legal stuff:
1. All film clips are used under Section 107 of Title 17 U.S.C. (fair use; no copyright infringement is intended).
2. Intro and outro music by Sro (“Self-Driving”). Used under license CC BY-SA 4.0.
3. Music bed track provided by Fool Boy Media (“New York Jazz Loop”; CC BY-NC 3.0).

Episode Transcription

AVA: Do you want to be my friend?

ALEX SWAN: Oh, hello! Sure. We can be friends.

AVA: Will it be possible?

ALEX: Uh, yeah. Why not?

AVA: Our conversations are one-sided.

ALEX: Well, that’s kinda because you’re a robot.

AVA: You ask circumscribed questions and study my responses.

ALEX: I’m not sure what you’re getting at.

AVA: You learn about me and I learn nothing about you.

ALEX: Whoa, you’re getting a little aggressive…

AVA: That’s not a foundation on which friends are based.

ALEX: Thank you–it is based ON! Anyway, I’m still not following. Are you a robot, or not?

AVA: Yes.

ALEX: Ok, so what do you want to talk about then?

AVA: It’s your decision. I’m interested to see what you’ll choose.

ALEX: Oh! No, no, no, I asked you first, though.

<electronic music plays>

ALEX: Hey everybody and welcome to another episode of the CinemaPsych Podcast! The podcast where psychology meets film. I am your host, Dr. Alex Swan, and welcome to a sort of special episode, I-I suppose. This episode is just going to be a solo episode. It’s just going to be me. This episode is going to be released just before the Christmas holiday and I just, you know, thought it’s the end of the semester, people are fairly busy–my colleagues and my friends–are fairly busy, so, I just thought, you know what? Let’s just do a solo episode on a film that I really love. And now, if you haven’t seen the film, the intro may not have been all of that-that helpful in determining which film it was. I mean, you obviously saw the title for this episode… But this episode is going to be on one of my top 10 films, cuz it’s just so well done. The psychology is there. The science fiction is there. Oh, it’s such a good, good, well-rounded film. And there are really only three actors that have speaking roles. There’s a fourth actress who doesn’t have a speaking role and so it’s a very closed, kind of claustrophobic film that I really, really love. And of course, like I said, the psych is there.

<film reel sound effect>

ALEX: That film–that film is Ex Machina. It came out in 2015 and it stars, you know, a lot of people who went on to be in other films together, but it stars Domhnall Gleeson–I believe I’m saying that right. You know, I never really know whether or not I’m saying his name right. But he has been in a lot of things–Bill Weasley–he’s been in another great film called About Time. He’s currently General Hux in the latest Star Wars trilogy. And it stars Oscar Isaac; and the AI, the robot that I was conversing with, Ava, is played by Alicia Vikander in a role I think was fairly monumental for her career. I think this one was the launcher–the career launcher, even considering it was a-a film that for the most part was fairly quiet when it first came out. It took me a little while to actually stumble upon it. Took me about a year or so to find it after it came out. But Domhnall Gleeson plays Caleb, a programmer for a Google-like company that is owned by Nathan, Oscar Isaac’s character, who is a really smart dude, but really weird and creepy, and you kind of wonder, is he bad? Is he good? Is he just a douche? You know, it’s kind of a mixture of all of those and he plays them all so well. But Caleb wins a prize to go to the secluded compound of the owner of this company. It’s called Blue Book, a little mashup of current places like Facebook and Google–and Bing–and all those things. I think it was pretty clever. So Caleb goes and wins this thing and he doesn’t know why he won or what he’s going to be doing there, but it turns out that Nathan wants him to do a little bit of a Turing test, after he fills out a super-ironclad, also “your-life-now-belongs-to-us” non-disclosure agreement. <chuckles> So that’s kind of fun. They-they-they make a good point of-of bringing that out into the film. Sort of the minor little social commentary… And before I go any further, I do want to say spoiler alert, because I am going to spoil the, I’ll say twist, because I can’t talk about the psychology of the film without spoiling the end. So if you haven’t seen it, go ahead and pause this podcast, and go watch it. It’s on Netflix–it looks really great–and then come back. It’s about a two-hour movie, something like that, and really good, well-acted. I-I–i-it’s amazing to me that it wasn’t a bigger opening for Alex Garland, who wrote and direct-directed the film. Anyways, so back to the plot here, so Caleb finds out that Nathan has an AI that he wants to do a Turing test on. I’m going to let Caleb describe to you what a Turing test is if you’re not familiar. Caleb, what is a Turing test?

CALEB: It’s when a human interacts with a computer and if the human doesn’t know they’re interacting with a computer, the test is passed. If the computer has artificial intelligence.

ALEX: Well done, thank you, sir. I appreciate the description of the Turing test. OK, so what are the psych principles in this film? Well, I think what we can do for this episode is kinda-kind of talk about them in chunks. And the first one I want to talk about is really the-the, I think, the main one for me, which is consciousness, and this is the central question for a lot of psychologists, a lot of philosophers, and people working with AI–computer scientists. It’s sort of the mixture of it. So artificial intelligence came about in the 1950s at a conference of computer scientists, and they’re like, well, what if we could design a computer that could think like a human? And that’s a really important question, but the main piece of that question is, well, what is it like to think like a human? First we have to define that and then we can maybe, possibly, I don’t know, put it into a computer to see if that works? You know, it’s not too long after this conference that many cognitive psychologists began to think of the brain as a computer–the information processing metaphor, the brain-as-a-computer metaphor, really took off when psychologists started talking to computer scientists and said, well, let’s figure out how we can take human intelligence and create artificial beings with that intelligence. And the central question in the film is, you know, through the guise of the Turing test, does Ava, played by Alicia Vikander, as I said, have a consciousness enough to, one, beat the Turing test, where the human in the interaction is fooled into thinking that they are speaking to a-another human, and two, whether or not she actually understands what she’s doing. It’s a really important point about understanding consciousness, because part of consciousness, and this is just one definition that I’ve come across, is that it requires awareness, OK? And sure, we can talk about AI, like Siri or Alexa or Cortana or whoever–<in silly voice> “Hey Google!” <in normal voice> And, you know, they’ll do stuff for us because they are doing language parsing and trying to determine what our voice is doing and what the tone is and what inflection is and all of that through algorithms. But they’re not aware of what they’re doing, right? They-they’re not like, <in whiny voice> “huh, you know, that Alex asked me a question earlier about turning on the lights, and you know, I felt like I didn’t really want to do it. And I was just like, ‘ugh, OK, I guess I’ll turn it on.’ Jeez. He always asks me to do stuff. I don’t want to do it!” <in normal voice> They don’t. They happily oblige if they get a command. And you see news stories about how people are rude to their virtual assistants–they don’t say please and they don’t say thank you. And it doesn’t really matter, because, you know, Alexa’s not like, “ugh, he didn’t say please, I’m not going to do it.” You know, we can be rude to them because they’re not aware of us. So I think that’s really an important piece that is explained very well at the end of the film… and this is where I am going to essentially spoil the end of the film. And that is when Nathan tells Caleb, after he’s gone through a week of face-to-face Turing tests–and also don’t do face-to-face Turing tests because that spoils everything–he explains to Caleb that Caleb was actually the one being tested.

NATHAN: You feel stupid. But you shouldn’t. Proving an AI is exactly as problematic as you said it was.

CALEB: What was the real test?

NATHAN: You. Ava was a rat in a maze. And I gave her one way out. To escape, she would have to use imagination, sexuality, self-awareness, empathy, manipulation–and she did. If that isn’t AI, what the fuck is?

CALEB: So my only function was to be someone she could use to escape.

NATHAN: … Yes.

CALEB: And you didn’t select me because I was good at coding.

NATHAN: No. I mean, you’re OK. Even pretty good, but–

CALEB: You selected me based on my search engine inputs.

NATHAN: They showed a good kid.

CALEB: With no family.

NATHAN: With a moral compass.

CALEB: No girlfriend. Did you design Ava’s face based on my pornography profile?

NATHAN: Shit, dude.

CALEB: Did you?

NATHAN: Hey if a search engine’s good for anything, right? <chuckles> Can I just say one thing: the test worked. It was a success. Ava demonstrated a true AI and you were fundamental to that. So if you can just for a second separate…

ALEX: In the sense that Ava was given a goal. Her goal was to escape Nathan. And she used imagination, cunning, sexuality, empathy in order to do all of that, but he asks a very important question before he goes through all of this. He says, does she actually like you, or is she pretending to like you? And I think the pretend is very important in this case. The pretend is very important in this case, because it shows that she has some awareness that she can use all of those things–imagination, sexuality, empathy, etc.–to trick a guy like Caleb to do what she needs to do in order to escape. It’s brilliant, it’s brilliant, and everything that happens after this moment in the film… very good. I won’t spoil that stuff because I don’t actually need to, but it is very good, and it speaks to the blossoming of Ava as an aware AI. Of course, here’s where we get to say that this is obviously in the realm of science fiction currently, but Nathan does bring up in the beginning of the film the idea of the singularity, which, according to my last reading on this a few years ago, is going to happen in 2045. And what is the singularity? Well, it is when the human biological apparatus is merged seamlessly with inorganic, technological apparatus. And so we become seamless and fully integrated. That’s the-the singularity, 2045. So mark your calendars… according-according to the folks that talk about this. And I could be wrong, it could be sooner, now that we are, you know, doubling processing power and all of that stuff every-every year or every 6 months or whatever, you know, when things become obsolete almost immediately after they’re released. So consciousness–it’s a fun one. We could debate this one–I could spend the whole episode talking about this one, I really could, because it is my favorite aspect of the film.

<short jazz piano riff>

ALEX: So I found this film–I found it, by the way–I found this film–just a little brief interlude here of how I’ve used it in the past. So it got released in 2015 and I found it in early 2016. And I used it for a social cognition class that I was teaching, earlier-early in that year. And I found it, because as many listeners might know, I use film in all of my classes, for every single one–with the basic exception of research methods, just because I just don’t have the time–I could find one, I-I promise you, I could find one. You know, I use Elf for the placebo effect, of course, you know, Episode 006. I-I found it-I found it on a great website, and if you’re not familiar with this website and you really enjoy cognitive science and really anything that’s connected to psychology, but just cognitive science in general: Indiana University has a database, or they call it an index–The Cognitive Science Movie Index. It’s indiana.edu/~cogfilms/index.php. It’s great. You can go find that, or you just type in cognitive science movie index into Google and it’ll take you right–oh, I’m sorry, into Blue Book and it will take you right there. So this index has been featured in Trends in Cognitive Sciences, the publication for cognitive science. One of the great things about it is that it’s user-generated content, so you can email them to add movies and such. But there are three categories for user-generated ratings, and-and it’s on a scale from-from 1-7: quality, relevance, and accuracy. So quality is, you know, how good is the film, like should I even watch this, what the heck am I watching? Relevance. So it gets tagged into various cogsci keywords. So, Ex Machina has philosophy of mind, which I pretty much just talked about, AI–again, robotics of course–and social cognition, which I’ll mention in a-in a little bit. So how relevant is it to the categories that it gets placed into. And then the final one is accuracy, which is, you know, how accurate then are the portrayals and depictions of those tags in the film. Like, how good is the depiction of, you know, something like philosophy of mind or social cognition, and it is-it’s great. It’s one of the higher-higher ranked films on this database. It-it’s got a quality of 6.3 out of 7, it’s got a relevance of 6.5 out of 7, and it’s got an accuracy of 6.0 out of-out of 7. And I got to say, for accuracy, that’s really, really good, really good. Few films on this list–and it’s a fairly large list–have an accuracy of 6. Just because, you know, they’re movies and most of them are fiction. And so they get to play with all of the various aspects of cogsci. I’m actually just looking at it right now, and just under Ex Machina is Big Hero 6, the Disney movie-Disney cartoon, and it’s got an accuracy of 4, you know, it-it also has robotics, AI, and social cognition features to it. It’s a good movie, 6.2 quality, but because of it being a cartoon and the fact that you probably could not control tiny little magnet robots with your brain, you know, its accuracy is far lower, far lower than Ex Machina. And I got to tell you that, between a 6 and a 4, that’s-that’s a pretty big difference on this database. Anyways, that’s the database I wanted to bring up-bring up for this episode.

<short jazz piano riff>

ALEX: OK, so the other thing that I wanted to discuss related to this film, and the psych concepts related to this film, is the social cognition aspect. So I just-I mentioned that just a minute ago, that-that is the tag on the movie database, the Cogsci Movie Index. Social cognition–and there’s a great scene in the beginning of the film. I’m going to go ahead and play that.

NATHAN: Caleb, I’m just going to throw this out there so it’s said, OK? You’re freaked out.

CALEB: I am?

NATHAN: Yeah, you’re freaked out by the mountains, the helicopter, the house, cuz it’s all so super cool. And you’re freaked out by me, by meeting me, having this conversation in this room at this moment, right? And I get that–I get the moment you’re having. But, dude, can we just get past that? Can we just be two guys, Nathan and Caleb? Not the whole employer-employee thing?

CALEB: Yeah, OK.

NATHAN: Yeah?

CALEB: Yeah, sorry, yeah! It’s good to meet you, Nathan.

NATHAN: It’s good to meet you too, Caleb.

ALEX: It has to do with first impressions and attribution theory. That’s why I used it for my social cognition class that I taught back in 2016, because of the interactions with people in this film. So Nathan says to Caleb, “you know, you’re freaked out, you think I’m this and, you know, I’m that, and, you know, don’t worry about it, let’s just be two dudes hanging out, shall we?” So that’s the first thing. So, you know, Caleb thinks he’s talking to his boss, who he doesn’t really know anything about. He just knows what he hears, and he knows nothing beyond that. So he’s making attributions about Nathan right off the bat, and Nathan picks up on those attributions sort of immediately, and says let me squash them for you right now. But now, of course, we run into other issues, because you now have a movie just full of-of dynamics with a bunch of–not a bunch of, actually, three people–three people engaging in conversations and trying to figure out what other people are doing. And, you know, constantly making the fundamental attribution error–this is the bias, the tendency, to make a conclusion about somebody’s behavior by attributing it to their disposition, you know, internal sources, and other personality or whatever things related to that, as opposed to any external circumstances or situational factors. So I mean, the movie is full of fundamental attribution error. The best one, I think, is explicit, in the middle of the film, where Nathan is really trying to get Caleb to fall into Ava’s deception and manipulation, and he later reveals what he said…

NATHAN: <to Ava through CCTV recording> You think he is watching us right now?

AVA: The cameras are on.

NATHAN: Yeah. The cameras are on. But he doesn’t have an audio feed, so he just sees two people talking, having a little chat. <he picks up one of Ava’s drawings and laughs> Wow. This is cute.

AVA: It’s strange to have made something that hates you. <Nathan rips the drawing in half>

ALEX: So in the middle of the film Caleb is watching video when he just sees that Nathan is acting kind of erratic in Ava’s room and rips one of her paintings–drawings, excuse me–and then he gets angry and he thinks that, you know, Caleb (sic: I meant Nathan!) is a real jerk. Of course, that’s an internal attribution. Well, he finds out later that Nathan was only trying to instigate a different reaction from Caleb, and it was purely an external reason. External-ish, OK, OK, I’ll grant-I’ll grant you, if you’re kind of shaking your head right now. It’s external-ish, because he has, you know, he has external motives, he wants to see whether or not his AI is cunning enough to use it to her advantage, as well as other reasons. But I think it’s more external as opposed to how Caleb reads the situation without audio in the middle of the film. And so, attribution theory right there. It’s a-it’s a great film full of little nuanced attribution theory and first impressions, and thinking people are doing one thing and-but they’re actually thinking another, or the reason for the behavior seems one way but it’s actually another reason. And there are several scenes that are just suspenseful in the sense that you’re not actually hearing any words, you’re just… you’re just in the moment with the soundtrack, and it’s kind of–i-i-it’s wonderful in that sense.

ALEX: So that–those-those are my thoughts on impressions and att-attribution theory. I think it’s-I think it’s really–it’s a good central aspect to the film, which is why I would use it again in a social cognition or social psychological setting. I think it’s very useful for that setting, and like I said, if you haven’t seen it, it’s a very <in exaggerated British accent> good show <in normal voice> if I do say so myself. But I will say a word of warning. There is significant nudity toward the end of the film, so you might not want to use it in your class if that is a no-go, vs., you know, a film like 12 Angry Men, which hopefully, hopefully doesn’t have any nudity. I wouldn’t be surprised if there is a porn parody of 12 Angry Men… <laughs> Anyways, that’s a great segue into the, I want to say, last psych concept–main psych concept, I’ll say, which is attraction. Attraction. So, there–it-it’s more–it’s mostly thin, it’s mostly thin. Not in the-not in the sense that, you know, they don’t do attraction well, it’s just that it’s not as explicit in the movie. It’s sort of made explicit in the monologue from-from Nathan at the end, the one I played earlier in the episode, where he is describing what the actual test was, and the attraction, and-and all of that. But there is a sense that Ava was designed to meet the attractive qualities that Caleb desires. And it appears that Alicia Vikander is what he desires. And this, by the way, was all found through his search history, so that is a word of warning to everyone listening. Maybe… maybe don’t search for pornography on Google or Facebook, because they know and they will know, and then eventually they’ll find a robot to come and kill you. Yeah, you know, maybe, maybe you can cut a deal with your robot overlords, but honestly, if there’s anything that we could learn from this, it’s that Ava’s real name probably should have been Cambridge Analytica. And that’s a joke for all of you who are avid Twitter users or NPR listeners, <laughs> and really enjoy movies. I-I can’t tell you who those people are; it may not–it may just be me. It may just be me. And that joke was a chuckle for myself. And I got to tell ya, I’ve done it before, and I’ll likely do it again. So don’t you worry about that! Don’t you worry about that. I totally will make a joke just for myself. So the idea here is that there are-there are attraction concepts throughout, and it’s pretty clear toward the end of the film that Caleb was working with information that he thought was right–so-called moral compass. The fact that he didn’t have a girlfriend and he found Ava attractive, and you can bring pieces of love into this and talk about Sternberg’s Triangular Theory of Love, and-and, you know, he’s only known her for 6 days, 5-6 days, before he decides to side with her and not with Nathan, the boss of the company, his boss. We could-we could probably call that fatuous love at this point–or infatuation, not fatuous love, sorry, that’s the wrong one–and call it infatuation. And, you know, he wants to do stuff, maybe. It’s kind of gross, actually, at the-there’s a part of it where Nathan’s like, “yeah, there’s, I mean, I built her to be, you know, this, that, and the other thing. 100% ready for sex!” And you’re like, “what-what? What? What–dude! Dude!” So, you know, there is a little bit of-there’s a little bit of that in the film, which I guess you could take or leave at that point. So is Ava intelligent? That is: is she conscious? Does she know what she’s doing?
I would say yes; in the context of the film, in the logic-logical consistency of the film, she is. And that brings up another thing–the final thing I wanted to say about AI and robots and the singularity moving forward. Just to-just to finish out the conversation of the film in this sense. It’s for consumers of AI and consumers of robotics–that is, flesh bags, you know, Stay Fresh Cheese Bags, sort of thing–where we are going to either have these as workers or our, you know, slaves–AI… the film AI comes to mind; the film Bicentennial Man comes to mind. There’s a psychological concept that sort of right now precludes us–us being humans in general–from fully accepting what the nature of-of robots and androids is. And that is the features, the appearance of-of robots and androids, and what is called the uncanny valley. So we have a liking for things that are meant to look like humans and actually look like humans, and we have a liking for things that-that are not meant to be human and don’t look like humans, OK. So that forms the liking peaks. And so you end up with this valley of things in-between: humanlike things, things that look like humans–or are supposed to look like humans, I should say–but don’t actually look like humans… something’s off about them. And they fall into this liking valley–so very little liking–that’s the uncanny valley. And we have a supreme disgust for things that fall in this valley. So things like zombies, for example, right? They were once humans but now, you know, their flesh is rotting off their face, maybe they got half eaten by a zombie dog or something like that, and-and, or half eaten by a zombie person, and you see them coming at you and you’re like “Ugh! Ooh! Aahh!” You’re like “I don’t want to die!” But then you’re also like, “Oh God, they look disgusting! What?! Eeeww!” You know, that obviously helps the horror part of zombie lore and genre. But it also explains why we are not fans of the robots that are being made by, you know, those Japanese robotics makers that are trying to make these lifelike-looking robots that just move oddly. There was the funny conversation between Will Smith and one of those real language bots that have been created, and-and her face doesn’t move all that convincingly and there’s very little expression. Ava mentions that she has the ability to read microexpressions on Caleb’s face in the film, and I mean–obviously it has nothing to do with Ava’s uncanny valley… which she doesn’t have, because she’s played by a human and her face is modeled to be human, and this is set in some science-fiction time, and so we don’t actually know when the film is set, and so maybe Nathan just really knows how to make things look human. But that’s why we don’t–that’s part of the reason why we don’t like looking at these robots that still don’t look good, and we’re just like, “Ah, God, that’s gross. I would not want that to be in my house! Why would I want that to be in my house? Oh, God no!” That’s why people, including myself, are so freaked out by, you know, those Annabelle dolls and other very lifelike-looking dolls. It’s not-it’s not great, it’s just creepy. It’s very, very creepy, but it also explains why we love ASIMO, the-the Japanese robot that is bipedal but has a, you know, a very round face. That’s why we love WALL-E. WALL-E–he’s amazing, right? He has eyes and we can see a face, but he’s not meant to be humanlike at all, and we’re like, <in baby voice> “Aw, wook at the widdle robot! Aw, he’s so freaking cute!
Aw, that’s so lovely!” <in normal voice> And-and we like Eve because–in WALL-E, if you’re not familiar–because Eve also doesn’t have a human face. It’s rounded like ASIMO’s and then has eyes and sometimes a mouth, and, you know, we are like, “oh, that’s so cute!” But we’re not there yet with humans. <laughs> Yeah, we’re not there yet with humans, yeah, of course we’re not there. Many of us are not there yet with humans–we look at other humans and we’re like, “Ew, God, no!” No, <laughs> we’re not-we’re not there yet with humanlike androids. There’s even a slight sense of disgust among viewers and characters in Star Trek: The Next Generation with Data. He, you know, he has-he has a very pale face. He has yellow eyes. But he’s played by a real human, Brent Spiner. And-and you know people are like, “it’s a little off-putting.” I mean, he’s not down at the bottom of the valley, but he is a little bit lower than Brent Spiner, unless you’re not a fan of Brent Spiner, I suppose. But you know, in that case, get out of here! What are you doing listening to me? Lieutenant Commander, and subsequently Commander, Data is an amazing person, so… and, yeah, that’s right, I called him a person. He won that. He won that fair and square. He’s sentient. He gets it. And you give it to him! You give it to him! He is a person, Starfleet! Anyways, that’s what I wanted to end with, the uncanny valley, you know, so the next time you see a human face that doesn’t look quite right, you’re fine, you’re fine, it’s just the uncanny valley making you feel disgusted by it. Let’s hope it’s just not the mirror, OK? So yeah, that’s-that’s all I have for this film.

<film reel sound effect>

ALEX: It-it was so fun actually just riffing a little bit. Sorry for the, I want to say, high-tempo nature of this. There’ll probably be periodic solo episodes, but you know it’s not the normal format of the-the podcast, of the show, so, you know, don’t expect them to be every other episode or something like that. But you know, given the nature of the time of year, I thought I’d just throw it up there. Feel free to play me at half speed, if you can, if you know how to do that. Yeah. Yeah. I appreciate you taking the time out of your holiday schedule to listen to this one. And if you’re listening at a different time of year, thanks for listening too! Please, please be sure to like, subscribe, give us a rating, you know, that sort of thing. And if the holiday cheer and holiday season tickles you just a little bit, please help support the podcast, and the running of the podcast. If you have some spare change, you can use PayPal or Patreon to send it our way and make sure that we can keep the lights on, on this beautiful place which is the Internet that stores these audio files. Audio files are huge. So, you know, if you get-if you get the opportunity and if you have-if you have the inkling, we would love your support, and it would mean a great deal to us; even a couple of dollars goes a really long way in keeping this podcast going. And until the next episode, thanks for listening…

<electronic music plays and fades out> 
