In this segment, we're going to talk a little bit about our professional responsibilities and how we think about ethics when we collect data and do research at scale. So, I first wanted to ask Professor Hogan: as a professional UX practitioner, what do you think of as your ethical obligations to the people you work with and the people whose data you use? How do you think through these issues?

So, I think that it manifests on several levels. The first one is really that I always try to make sure that we're doing no harm to the participants that we're working with, and we do that in several ways. First, even when we're doing remote unmoderated research, we administer what are essentially informed consent forms, so that people are well aware of how their data and the feedback they're sharing are going to be used, and who's going to be seeing it, because it's only fair to set expectations about who's going to see what they're saying and what they're doing. I think the other piece that's important is that when you're working with a client, you need to make sure that you're honoring what they feel is important, so focusing on the groups of people and populations that they're interested in, but also setting their expectations about what's possible and what you can be doing with different groups of people. Part of that is, for instance, that we don't do a lot of work with minors, because from a legal perspective there are a lot of laws that govern what you can do with minors, at least in the United States. We also want to make sure that we're dealing with people who you would say have agency.
Primarily, that's always going to be the case with a lot of the people that we tend to do research with in a professional setting, because they have access to the resources to travel, for instance, to do research in a lab, or to the technologies that are required to do research remotely.

So, do you think UX professionals have any other obligations to their clients or to the people they're working for? Is there anything else they should be thinking of as they're collecting data on behalf of somebody else?

Yeah. So, I think there are several things. One is really making sure that clients understand that they have rights too. There's a certain amount of information that you don't want to be sharing outside of the relationship that you have with them as well. So, understanding who that data is going to be going to, and sometimes that can even mean there are hard barriers between groups at the same organization. I've definitely seen that, where there are limits on what information can be shared across those groups. I also think it means just being aware of some of the internal politics and policies around how you can share information about who you're doing research with and for. I know that there are a number of companies that actually do not perform any UX research outside of the four walls of the company, just to keep corporate information within the four walls.

I hadn't thought about trade secrets or anything like that as an ethical issue, but that's very much there.

Trade secrets are definitely an ethical issue as well, and there are some organizations that will even hire contractors with that sensitivity in mind.

Right. If a colleague comes to you and asks questions about confidentiality of user data, or privacy, or concerns about personally identifying information, what are your standard practices around those things?

Yeah.
So, we are very careful to protect PII, which basically means not exposing any PII to our clients.

PII is personally identifying information.

Yes, and PHI as well, so protected health information. For several reasons, it's important to keep a wall between researchers and the people that they're doing research with, because there are some points when people are providing feedback and saying things that they wouldn't necessarily feel comfortable saying to someone they know. One of the great benefits, especially of remote unmoderated research, is that you have less moderator bias, because there isn't somebody there to react. So, we are very careful about PII, because we want to make sure that we're not exposing people's names, addresses, and locations. We even have blurring functionality that prevents that information from being collected, and we instruct all of our clients, "Hey, you cannot have people even complete tasks where that kind of information would be exposed."

It's a broad conception of do no harm that you talked about earlier, because in this case you talk about do no harm as in, even stop yourself: build these walls and don't collect PII, so you can't do harm. Another way I think of do no harm is to try to make everything as easy and efficient for the respondent as you can, so that you're respecting their time by not wasting it through bad design, or through experiments that don't work or that you're not going to use the data from. So, respecting people's time is a do-no-harm thing too.

Yeah, I definitely agree. So, setting expectations upfront; I think that happens on both sides, both the client side as well as the participants' side. I also think it's worth taking a step back to think about the type of research that you're doing when you're doing A/B testing.
That is, I think, a really tough spot for researchers, because do no harm means something a little bit different there. Partially, it's because sometimes you don't know what the unintended consequences of structuring an interaction in a certain way are going to be, or how it's going to impact the people that you're running the test with. So, I think it's important to make a judgment call, and a good one, that is foundationally about doing no harm, but also not just about that, because that's really a backstop: you actually want to be doing things that are beneficial for people.

That's right. As somebody who does more contract work, do no harm also applies to the system or the tool or the community that I'm doing work with. So, it's possible to do harm to an individual, but I've also found that it's possible to do harm to an entire community or to an entire platform if you're not careful. So, you have to think very broadly about who the types of people you're impacting are and how you protect them.

Yeah, you bring up a great point. I'm thinking about an example of a researcher on my team working with a community, and they actually had to stop the research because, while it wasn't a huge outcry, there was a real recognition that what was happening was not in the best interest of that community, and they hit the reset button.

Where do you recommend people go if they have more questions about ethics or would like to learn more about this topic?

Wow, there are many places, I think, because there's actually a pretty large community of people who are thinking about ethics and doing research in this area. I often recommend the UXPA site; the User Experience Professionals Association has a great section on ethics and a lot of great articles about these topics as well. UXmatters does as well.
I think that there's a lot there, and I also think that there's a recent book out, and I'm forgetting the name of it right now, but it really talks about some of the work that's been ongoing at Facebook. I know Facebook actually established a whole site specifically about what they're doing around ethics and privacy and changing their practices.

Great. Well, ethics is a big issue for UX researchers. If you want to learn more, please go to some of the sources that we mentioned. Otherwise, be very careful and remember to do no harm.