By Adam Turteltaub Michael Savicki, Senior Vice President and Chief Risk & Compliance Officer at American Express Global Business Travel (Amex GBT), knows the challenges and opportunities in mergers and acquisitions. The company recently completed the acquisition of CWT, a global business travel and meetings solutions provider. In this podcast he shares their playbook for effective due diligence, born out of their experience and the heightened regulatory requirements that they face. Among the insights he provides:
- Integrate your efforts with the business unit’s and work cross-functionally
- Partner with finance and the commercial team
- Have a solution-oriented “yes and” mindset
- Be sure the due diligence process focuses on all the risks: legal, regulatory, operational and reputational
Perhaps most importantly: think beyond the transaction. Look to what the acquired entity will need post-acquisition. Embrace the technology that will help get you where you want to be, including AI, which can help spot emerging risks sooner, while freeing your team up to do more strategic work.
By Adam Turteltaub Neurodiversity tends to be spoken of as an issue to be recognized and, quite often, as a barrier to overcome. Katie Roemer, Vice President, Compliance & Privacy Officer at Alta Hospital Systems, sees it differently: as an asset to your compliance team. In this podcast she points out that many neurodivergent people excel at pattern recognition and system-level thinking, as well as root cause analysis, all of which are of great value to compliance teams. They can also help us to communicate better. Meeting their needs can help with general workforce training. Some examples include:
- Avoiding densely packed slides with light fonts that are hard to read
- Breaking the learning up into discrete pieces
- Previewing what is going to be learned and the length of training
- Letting the audience know what is the most important part of the training
- Giving key takeaways and highlighting key points
To leverage the neurodiverse fully, she recommends creating a psychologically safe environment that encourages everyone to speak up without fear of consequences. This enables the expression of a diverse range of ideas from the entire team. Listen in to learn more about how the neurodivergent can be an asset to your compliance efforts.
By Adam Turteltaub What do a secret wedding and Richard Nixon have in common with HIPAA? A lot more than you might think, shares Bailey Mack, Chief Compliance Officer at Together for Youth. In this podcast she tells us the interesting history of privacy and the law. We begin in 1890, when a photographer trespassed to photograph a wedding he wasn’t supposed to be photographing. Thirty-eight years later, in the Olmstead case, wiretapping wasn’t deemed intrusive because no one entered the room. It was as if a privacy violation could occur only if there was trespassing involved. That began to change in the 1960s, when thinking evolved and the idea gained currency that privacy was about violations of a person’s right to privacy, rather than to property. Watergate led to further changes in which citizens were given access to government records about them. And, since then, more legislation has come and likely will. Listen in to learn more, and if you’re an SCCE or HCCA member, don’t miss her article in Compliance & Ethics Professional® magazine.
By Adam Turteltaub Executive presence isn’t simply walking in the room and having everyone instantly feel that you are in charge. It is something different, explains Jay Greenberg, the recently retired Chief Compliance Officer at the FBI. Instead, it is being powered by your core values and then making a maximum positive contribution to any situation by fully investing yourself in achieving that assigned mission. Executive presence, he shares, is a skill acquired through the application of experience, coupled with a great deal of self-reflection that focuses on self-confidence, core values and the help of mentors. Also of great value: preparation and confidence that is informed by past experiences, including failures. Even star leaders didn’t magically emerge, he reminds us. They learned from their failures, missteps and other learning experiences. It doesn’t matter, he explains, if you are working with leadership or rank and file employees. Know your core values, who you are, and your positive character traits, and focus ahead of time. It will help you feel self-contained and confident. He also advises keeping a bit of mental distance, being both a participant and an observer at the same time. It will help you tailor your approach to the outcome you want. Also, be sure you understand the perspective of your audience. Listen in to learn more about how you can master the skills of executive presence.
By Adam Turteltaub Listen up people: It’s all about the people. That’s the key message from Gabor Sulyok, Global Head of Commercial and Healthcare Compliance at BioNTech, and experienced senior compliance counsel Luciane Mallmann. At its core, ethics and compliance is a human endeavor. While regulations and standards provide the structure, it’s the people within an organization who bring these principles to life. A people-centered approach to compliance programs enhances engagement, supports better decision-making, and fosters a culture of integrity. From design to execution, every aspect of the program should reflect a deep understanding of how people learn, behave, and interact. This means rethinking how we educate, maintain awareness, and ensure accountability. Policies must be relatable and actionable. Training should be immersive and role-specific. And accountability should be balanced with support to avoid creating a risk-averse culture. They explain in the podcast that there are three key elements of a people-centered framework:
1. Speak-Up Culture. A healthy program starts with psychological safety. Employees must feel empowered to raise concerns without fear of retaliation.
2. Transparency and Accountability. Transparency builds trust. Training should include real-life ethical dilemmas and storytelling that resonates with employees. Sharing actual cases from within the company helps humanize compliance and makes it more relatable. Accountability, meanwhile, must be visible and fair. Leaders should model ethical behavior and be the first to complete training, setting the tone from the top.
3. Ethical Decision-Making. Decision-making frameworks should incorporate diverse perspectives and encourage thoughtful deliberation. Employees need tools to navigate ambiguity, and those tools must be grounded in the organization’s values.
Listen in to learn more about how to put people front and center in your ethics and compliance program.
By Adam Turteltaub The rise of generative AI has brought transformative potential to healthcare—from streamlining administrative tasks to supporting clinical decision-making. But alongside these benefits comes a growing concern: Shadow AI. Alex Tyrrell, Chief Technology Officer, Health at Wolters Kluwer, explains in this podcast that this term refers to the use of unauthorized, unmonitored AI tools within organizations. In healthcare, where data privacy and patient safety are paramount, Shadow AI presents a unique and urgent challenge both now and in the future. Healthcare professionals often turn to generative AI tools with good intentions—hoping to reduce documentation burdens, improve workflows, or gain insights from complex data. However, many of these tools are unproven large language models (LLMs) that operate as black boxes. They’re prone to hallucinations, lack transparency in decision-making, and may inadvertently expose Protected Health Information (PHI) to the open internet. This isn’t just a theoretical risk. The use of public AI tools on personal devices or in clinical settings can lead to serious consequences, including:
- Privacy violations
- Legal and regulatory non-compliance
- Patient harm due to inaccurate or misleading outputs
Despite these risks, many healthcare organizations lack visibility into how and when these tools are being used. According to recent data, only 18% of organizations have a formal policy governing the use of generative AI in the workplace, and just 20% require formal training for employees using these tools. It’s important to recognize that most employees aren’t using Shadow AI to be reckless—they’re trying to solve real problems. The lack of clear guidance, approved tools, and education creates a vacuum that Shadow AI fills. Without a structured approach, organizations end up playing a game of whack-a-mole, reacting to issues rather than proactively managing them.
So, what can healthcare organizations do to address Shadow AI without stifling innovation?
1. Audit and Monitor Usage. Start with what you can control. For organization-issued devices, conduct periodic audits to identify unauthorized AI usage. While personal devices are harder to monitor, you can still gather feedback from employees about where they see value in generative AI. This helps surface use cases that can be addressed through approved tools and structured programs.
2. Procure Trusted AI Tools. Use procurement processes to source AI tools from vetted vendors. Look for solutions with:
- Transparent decision-making processes
- Clear documentation of training data sources
- No use of patient data or other confidential information for model training
Avoid tools that lack explainability or accountability—especially those that cannot guarantee data privacy.
3. Establish Structured Governance. Governance isn’t just about rules—it’s about clarity and oversight. Develop a well-articulated framework that includes:
- Defined roles and responsibilities for AI oversight
- Risk assessment protocols
- Integration with existing compliance and IT governance structures
Make sure AI governance is not siloed. Those managing AI tools should be at the table during strategic planning and implementation.
4. Educate and Engage. Education is the cornerstone of responsible AI use. Employees need to understand not just the risks, but also the right way to use AI tools. Offer formal training, create open forums for discussion, and build a culture of transparency. When people feel informed and supported, they’re more likely to choose safe, approved tools.
5. Protect PHI with Precision. In clinical workflows, PHI is often unavoidable.
That’s why it’s critical to:
- De-identify patient data whenever possible
- Ensure only authorized systems, processes, and personnel have access to PHI
- Maintain up-to-date business associate agreements and data processing contracts
As you get closer to the bedside, the margin for error shrinks. Public devices and unlicensed LLMs should never be used in direct patient care. The regulatory landscape around AI is evolving rapidly—especially at the state level and in the EU. Even if federal guidelines are still catching up, organizations must be proactive. Bake privacy by design into your AI strategy from the beginning. Treat compliance not as a burden, but as a strategic advantage that protects patients and enables innovation. And be sure to listen to this podcast to learn more about the risks of Shadow AI.
By Adam Turteltaub There are few parts of an investigation that are more stressful than the interview with the investigation’s subject. Done right it can close all the loops. Done wrong, everything can unravel. To learn how to handle things best we turn in the second of our two podcasts on investigations to Wendy Evans, Senior Corporate Ethics Investigator, Lockheed Martin and Georgina Heasman, Senior Manager, Global Investigations at Booking Holdings. The two of them are the co-authors of our new book Fundamentals of Investigations: A Practical Guide and lead our Fundamentals of Compliance Investigations Workshop. In this podcast they offer a host of great insights, including:
- While it’s generally best to interview the subject last, there are times, such as in cases of alleged harassment or data theft, where you likely will need to sit down for a preliminary interview sooner
- Be sure to get a read on the subject and be respectful of the stress that they are under, including giving them psychological space before asking tough questions
- Clarify your role in the process as a collector of facts and that you have not already decided that they are guilty
- Invite them to share their perspective both in the interview and, if other things come to mind, afterwards
- Remind them of the confidentiality of the process and the need to focus on the allegation, not who made it
Listen in to learn more, and be sure to investigate their book Fundamentals of Investigations: A Practical Guide and the Fundamentals of Compliance Investigations Workshop.
By Adam Turteltaub Few people know more about conducting a compliance investigation than Georgina Heasman, Senior Manager, Global Investigations at Booking Holdings and Wendy Evans, Senior Corporate Ethics Investigator, Lockheed Martin. The two of them are the co-authors of our new book Fundamentals of Investigations: A Practical Guide and lead our Fundamentals of Compliance Investigations Workshop. Not wanting to miss out on their expertise, we scheduled two podcasts with them. In this, the first of the two, they share a broad overview of best practices for conducting investigations. Those include ensuring that even compliance team members not responsible for investigations have at least a fundamental understanding of them. For an investigation to go well, they explain, it begins with the first report. There has to be a clear line of communication and a culture that encourages employees to come forward. Once you receive that initial contact, it’s important to remember that it tells the story only from one side. You need to ask questions to clarify what was seen and heard and start thinking about what other information you will need to gather. To keep the information flowing, they recommend telling the reporter and everyone else you interview to reach out to you again if additional information comes to mind. While testimonial evidence is invaluable, don’t stop there. As you gather the who, what, when and where, be sure to look for the documentary evidence that you need, which requires having strong relationships with departments that have it, such as HR and security. And, throughout the process, stay focused to avoid going down rabbit holes or getting inundated with more information than you need. Listen in to learn more, and be sure to check out Fundamentals of Investigations: A Practical Guide and the Fundamentals of Compliance Investigations Workshop.
By Adam Turteltaub Uh oh. The Feds are in the front lobby with a search warrant. Things are bad, and you don’t want anyone on site to make it worse. The secret is preparation, shares Veronica Xu, SCCE & HCCA Board Member and Chief Compliance Officer, HIPAA Privacy Officer, ADA Administrator at Saber Healthcare Group. That begins with establishing a cross-functional team that likely includes compliance, the general counsel, CEO, CTO and, depending on your industry, the chief medical officer and others. Each should help shape the plan and be ready to play their part if a raid occurs. In addition, onsite staff, right down to the receptionist, needs to understand their responsibilities, including whom to call for help. Not only will that help avoid very costly mistakes, it will reduce errors, fear and stress at what will likely be an extremely difficult time. What an individual gets trained on will vary by role. Yet, there is one commonality to the training: everyone needs to know the importance of staying calm and being polite and respectful. Be sure to also outline the do’s and don’ts. There’s one other thing she strongly advises: remember to communicate with your workforce. Be as transparent as possible and avoid conflicting messages. That will keep the lines of communication open and help avoid the speculation that can make the disruption even worse. Listen in to learn more, and then take a fresh look at your current plans for responding to a government raid.
By Adam Turteltaub Employees may trust an AI chatbot more than they trust you, and that’s not necessarily a bad thing, if it leads to more reporting. In this podcast, Debbie Sabatini Hennelly, Founder & President of Resiliti, shares that a recent survey conducted by Case IQ reveals that nearly 70% of respondents expressed no concerns about AI being involved in the helpline process. This openness is driven by several key factors: increased anonymity, ease of use, and a perception that AI offers a fairer, more impartial experience than speaking directly with a human. These findings underscore a broader theme that continues to emerge in conversations about helplines: trust. Employees are more likely to report concerns or misconduct when they trust the system—when they believe their information will be handled confidentially, their identity protected, and their report taken seriously. Not surprisingly, they also want to understand how their information is being used and how their anonymity is being safeguarded. This is especially important when helplines are outsourced to third-party vendors. Communicating clearly that the helpline is external—and therefore more secure and impartial—can go a long way in building trust. But transparency doesn’t stop there. Employees also want to know what happens after they make a report. What’s the process? What can they expect next? Setting clear expectations and following through with updates helps reinforce that the organization is responsive and serious about addressing concerns. It’s not enough to share this information only once a year during compliance training, she warns. Employees are constantly bombarded with messages, and unless helpline communication is consistent and visible, it risks being forgotten or ignored. Still, even with those reminders, barriers remain, especially fear of retaliation. Organizations must address this head-on. First, there must be a clear, well-communicated prohibition against retaliation.
But more importantly, leaders need to understand that retaliation isn’t always overt. It can be subtle—being passed over for key assignments, being excluded from team activities, or receiving the cold shoulder from colleagues. Creating a culture where employees feel safe to speak up starts with leadership. Managers and executives must model the right behaviors, reinforce anti-retaliation policies, and foster an environment where concerns are welcomed, not punished. One of the most critical—and often overlooked—elements of a successful helpline program is training leaders on how to respond when a report is made. Too often, well-meaning managers try to “get to the bottom of it” themselves. But when they start asking who reported what or conducting their own informal investigations, they can unintentionally obstruct the formal process and make employees feel unsafe. A favorite tactic of hers for addressing this is to ask persistent leaders: “Do you want to be a witness and be deposed?” It’s a powerful reminder that involvement in an investigation has consequences—and that the best way to support the process is to let it unfold professionally and confidentially. Listen in to learn more, and, hopefully, get employees to trust and speak up more.
By Adam Turteltaub If all you’re worrying about is tone at the top, you’re missing a key portion of the choir. With most people reporting to middle managers, they play an integral role in ensuring a culture of compliance and ethics truly permeates the organization. Evie Wentink, Senior Compliance Consultant at Ethical Edge Experts, observes that while many organizations invest in crafting comprehensive codes of conduct and articulate expectations for ethical leadership, they often fall short in equipping managers with the tools, training, and support necessary to fulfill those expectations. This gap can undermine the effectiveness of compliance efforts and leave companies vulnerable to ethical lapses. At the heart of the issue is a lack of intentional communication. Middle managers are frequently expected to embody and promote ethical leadership, yet they are rarely given a clear understanding of what that entails. To bridge this gap, organizations must develop structured plans that define ethical leadership in practical terms. These plans should include specific deliverables, resources, and expectations tailored to the manager’s role. By doing so, companies can ensure that managers are not only aware of their responsibilities but also empowered to carry them out effectively. Authentic, ongoing conversations led by these managers are a cornerstone of a successful compliance culture. These discussions should not be limited to formal training sessions or annual reviews. Instead, they must be woven into the fabric of everyday operations. Managers should be encouraged—and required—to initiate “ethics or integrity minutes” at the start of team meetings. These brief segments provide a consistent opportunity to address ethical topics, reinforce values, and normalize open dialogue about compliance issues. To support these conversations, organizations should provide managers with practical tools.
These might include:
- Ethics spotlight cards that highlight key compliance themes
- News articles that can be used to spark discussion around real-world ethical dilemmas
- Access to updated policies and codes of conduct, with notifications when changes occur
Tracking and analyzing these conversations is equally important. Compliance teams should maintain records of who is engaging in discussions, what topics are being covered, and which issues are generating the most questions. This data can be invaluable in identifying risk areas, refining training programs, and tailoring future communications. Often, the most common questions arise immediately after a training session, indicating that such moments are prime opportunities for deeper engagement. Moreover, it’s essential to recognize the broader impact of middle management on organizational integrity. Prosecutors and regulators increasingly view middle managers as pivotal figures in corporate misconduct cases. Their actions—or inactions—can significantly influence whether a company succeeds or fails in maintaining ethical standards. Consequently, fostering a culture of accountability and proactive communication at this level is not just beneficial—it’s critical. Ultimately, the goal is to create an environment where ethical conversations are natural, frequent, and valued. When managers consistently lead by example and facilitate open dialogue, employees become more comfortable raising concerns and asking questions. This cultural shift enhances transparency, reduces risk, and strengthens the overall integrity of the organization. In summary, bridging the compliance gap at the middle management level requires a multifaceted approach: clear expectations, practical tools, authentic conversations, and ongoing tracking. By investing in these areas, organizations can transform their compliance programs from static documents into dynamic, living systems that truly support ethical behavior at every level from the top on down.
By Adam Turteltaub Why did the AI do that? It’s a simple and common question, but the answer is often opaque, with people referring to black boxes, algorithms and other words that only those in the know tend to understand. Alessia Falsarone, a non-executive director of Innovate UK, says that’s a problem. In cases where AI has run amok, the fallout is often worse because the company is unable to explain why the AI made the decision it made and what data it was relying on. AI, she argues, needs to be explainable to regulators and the public. That way all sides can understand what the AI is doing (or has done) and why. To create more explainable AI, she recommends the creation of a dashboard showing the factors that influence the decisions made. In addition, teams need to track changes made to the model over time. By doing so, when the regulator or public asks why something happened, the organization can respond quickly and clearly. In addition, by embracing a more transparent process, and involving compliance early, organizations can head off potential AI issues early in the process. Listen in to hear her explain the virtues of explainability.
By Adam Turteltaub Despite being a Civil War-era statute, the False Claims Act (FCA) always has something new going on. To find out what’s hot these days, we spoke with Joshua Drew (LinkedIn), a former federal prosecutor and chief compliance officer and currently a Member at Miller & Chevalier. Lately, he explains, there has been a steady stream of activity:
- May: The Administration announced the Civil Rights Fraud Initiative, which proposes to use the FCA against any federal funding recipient that it believes is operating DEI initiatives that violate antidiscrimination laws.
- July: A new working group was created between the DOJ and HHS to focus on healthcare and life sciences. It encouraged whistleblowers to file actions in areas such as Medicare Advantage; drug, device and biologics pricing; and barriers to patient access, amongst others.
- August: A trade task force was created to encourage whistleblowing against tariff violators.
All of this occurs against a backdrop of activity by the Administration to identify and fight waste, fraud and abuse. Listen in to learn more about where the Administration is focusing and what compliance teams can learn from recent actions.
By Adam Turteltaub The possibilities of AI don’t stop with generative AI such as ChatGPT. Agentic AI may have more potential for compliance teams, Zahra Timsah, co-founder and CEO of i-GENTIC AI, tells us. Unlike generative AI, which is well known for its ability to create content, agentic AI can be used as an internal enforcement agent. Trained properly, she tells us, it can look for a potential violation and stop it. For example, it can spot personal health information that is about to be transferred and redact the sensitive data automatically. This ability to step in and take action will, she believes, free compliance teams from many routine tasks and allow them to shift their focus to matters that are more complex and fall within the grey area. It will also help teams speed up the rate at which new laws and regulations turn into effective internal policies. In addition, agentic AI will be able to produce measurable value by demonstrating what it can do to manage risk, improve trust and increase efficiency. Listen in to learn more about agentic AI’s ability to improve your compliance program.
By Adam Turteltaub Lewis Eisen (LinkedIn) is the author of the book RULES: Powerful Policy Wording to Maximize Engagement, and he wants to change the way people think about and write policies. Too often, he observes, policies contain parent-child language, with a scolding tone that turns people off and keeps them from wanting to read the policy, or even follow it. They also contain a great deal of complexity, laying out all the many processes and procedures. Instead, he recommends that companies adopt policy statements that are simpler and can tie to values that people can identify with. All the other stuff – complex procedures, examples, backgrounds and so forth – belongs elsewhere, he argues, for employees to see after they have had the opportunity to see the policy and buy into it. It’s an intriguing approach. Listen in to learn more about how to reimagine your policy-making process.
By Adam Turteltaub Andrew McBride, Founder & Chief Executive Officer at Integrity Bridge, recently wrote an article entitled Generative Artificial Intelligence Use Cases for Ethics & Compliance Programs. Intrigued by the topic, I sat down with him for this podcast. He shared that many compliance teams are charged with using AI but may not have the desire or know-how to create and implement a use case. He shares that AI is very good at doing a specific role and a specific activity. Consequently, compliance teams should consider not just the use of AI as a whole but specific needs that they have for it. He gives five specific use cases:
- Interpreter. AI can translate documents and training in seconds. It can also help you distill long documents into pithy, usable summaries both for you and management.
- Drafter. It can draft from scratch or improve what you have already put together, even creating interactive scenarios that can be useful in training.
- Researcher. You do have to be mindful of hallucinations, but if you set up the AI to use only your own data or a trusted set of sources, it is more reliable. Do, though, always check its work.
- Data Analyst. As compliance teams are called to amass and analyze more data, AI can help you do it, identifying, for example, relationships between training and calls to the helpline.
- Monitor, Investigator, Auditor. AI can review both structured and unstructured data, helping you identify red flags.
Listen in to learn more, and then start building your own use case for generative AI.
By Adam Turteltaub Why? Why are you asking that? Do you really need to know it? Is it going to tell you something you need to know? Is it a question that anyone could even answer? All of these are questions to ask yourself and colleagues when they propose adding an item to your due diligence questionnaire. As Kristy Grant-Hart (LinkedIn), author, speaker and Head of Advisory at Spark Compliance, which is now owned by Diligent, explains, too often due diligence questionnaires are filled with questions that are unnecessary at best and counterproductive at worst. They are born out of a desire to cover all the bases, not necessarily to get you just the information you need. Instead of throwing in everything including the kitchen sink, it’s far better to take, as elsewhere, a risk-based approach. Work directly with those who own the risk review. And, if the response doesn’t matter, don’t ask the question. Listen in to learn more about how to create a due diligence questionnaire that gets the answers you need, and not the ones you don’t.
By Adam Turteltaub With ever more attention paid to the role of boards in overseeing compliance, the question naturally comes up: Do boards even understand what makes for an effective compliance program? To help answer that question we spoke with Vera Cherepanova (LinkedIn), Executive Director of the non-profit Boards of the Future. She shares the unfortunate news that many boards are not where they should be. They are not fully seeing culture as a risk factor and driver of misconduct. Nor do many understand their own duty to manage it. That’s dangerous in these times, especially now that governments are paying closer attention to culture. Forces, though, are starting to change the equation, compelling boards to understand the role they and compliance play together in ensuring both integrity within the company and business success. Supply chain issues and ESG, for example, have brought compliance in closer contact with the governing authority. So, too, has regionalization. As countries take divergent paths on more and more issues, the compliance team will be essential in helping the board understand the risks that they face. More, though, will need to be done. Boards need to start addressing issues such as values conflicts like they do other risks. And, more people with compliance experience should be added to boards. Listen in to learn more about what boards are and are not doing.
By Adam Turteltaub With a rising focus on value-based care, and a new program seeking to make the approach mandatory, we spoke with Ed White (LinkedIn), Partner at Nelson Mullins. Previous efforts to move toward value-based models, such as Accountable Care Organizations (ACOs), faced significant barriers due to regulatory frameworks like the Stark Law and Anti-Kickback Statute. These laws were designed to prevent financial incentives from influencing medical decisions, but they also limited the ability of hospitals and physicians to collaborate in ways necessary for effective value-based care implementation. Recognizing these constraints, CMS and the Office of Inspector General (OIG) collaborated in 2020 to issue new regulations aimed at facilitating the transition to value-based care. The next step in the transition is the new Transforming Episode Accountability Model, or TEAM, program, which will become mandatory in 2026. This program includes 740 hospitals across the country and targets five specific surgical procedures. Participating hospitals must coordinate care with a range of providers—including specialists, primary care physicians, labs, durable medical equipment (DME) providers, hospice agencies, and others. The TEAM program is designed to last for five years, during which time hospitals are responsible for ensuring that patients are connected to appropriate post-discharge care, including follow-up with primary care providers. The goal is to reduce complications, avoid emergency room readmissions, and promote better health outcomes—all while keeping costs below a CMS-established target price. To drive efficiency, the TEAM program introduces three financial risk “tracks”:
- Upside-only track – Hospitals can earn shared savings if costs come in below the target price.
- Moderate-risk (upside/downside) track – Hospitals can either earn savings or incur penalties depending on performance.
- Full-risk track – This track will offer both greater risks and rewards.
According to industry consultants, two-thirds of participating hospitals are expected to lose money in the early phases of the TEAM program. Hospitals must rethink their compliance, care coordination, and partnership strategies in the wake of these changes. Listen in to learn more about what this all means for your compliance program both today and in the future.
By Adam Turteltaub Imagine that it’s time to move on from compliance to another role, either by choice or by being voluntold. Does what you learned in compliance help? Absolutely, according to Kortney Nordrum, Vice President and Senior Corporate Counsel at Deluxe. Amongst other benefits, it taught her how to break down large issues into more manageable pieces, better identify and manage risks, and help deals close. That isn’t to say the transition has come without challenges. She has had to learn to trust others to run compliance and also to be less risk averse. Listen in to learn more about how your compliance skills can help if your career ever takes you to another profession.