Assiaq’s Code of Conduct

No harassment

Assiaq will not tolerate any harassment within its team, from its audience, or during its events. Harassment includes, but is not limited to:

  • Inappropriate physical contact
  • Unwelcome sexual attention
  • Display of sexual images in public
  • Deliberate verbal or physical intimidation
  • Sustained disruption of discussions, talks or other events
  • Advocating for or encouraging any of the above behaviors

No discrimination

Assiaq will not tolerate any discrimination within its team, from its audience, or during its events. Discrimination includes, but is not limited to, discrimination on the basis of:

  • Race
  • Ethnicity
  • Gender
  • Gender identity and expression
  • Sexual orientation
  • Disability
  • Physical appearance
  • Body size
  • Age
  • Religion

Artificial Intelligence Ethics

TL;DR

Assiaq is committed to using generative AI tools responsibly and transparently. We continually monitor and adapt our practices to ensure ethical use. We welcome community feedback and are dedicated to being open about our methods.

Thank you for helping us maintain high standards in using cutting-edge technology!

Text-to-Image Generation Tools

We are passionate about exploring technology’s potential to create innovative and engaging content. Text-to-image generation tools are among the most exciting technologies available today, and they offer endless possibilities to enhance our storytelling and enrich our visual brand.

While we also work with graphic designers, artists, and other sources of visual media, text-to-image tools are an additional way for us to bring our stories to life. We understand the questions and concerns surrounding these new tools, so we’ve established this code of conduct and are making it public.

Our Commitment to Responsible and Transparent Use

We pledge to use text-to-image generation tools responsibly and transparently, respecting the rights of artists and creators. We welcome your feedback and criticism about our use of these tools.

Responsible and Transparent Practices:

  1. Model Monitoring and Credit: We monitor the models we use and credit them accordingly to inform our readers. Not all models disclose their training methods fully, but we conduct thorough due diligence with every new model to understand its origins and limitations. We do not use models specifically trained to replicate the work of individual artists.
  2. Addressing Bias: We acknowledge the inherent biases in AI models. For example, if a model tends to depict venture capitalists as men, we consciously ensure our imagery is diverse and inclusive.
  3. Regular Reviews and Audits: We regularly review and audit our use of text-to-image tools to ensure fairness and inclusivity. We stay updated with evolving laws and regulations to remain compliant and responsible.

How We Use Text-to-Image Generation Tools

When creating images for our articles or projects using generative AI, our process typically involves the following (an illustrative sketch follows the list):

  1. Concept Development: We define the basic structure of the desired image and experiment with various prompts to refine the concept. We avoid using the names of living artists in our prompts.
  2. Inpainting: We add or remove elements to fine-tune the image.
  3. Iteration: We go through multiple iterations using image-to-image techniques, often resulting in 10-20 prompts per project. Due to the complexity, we provide general guidelines rather than listing each prompt.
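For readers curious what this looks like in practice, here is a minimal, illustrative sketch of the concept-development and iteration steps using the open-source Hugging Face diffusers library. It is not our production pipeline: the model name, prompts, and iteration count are placeholder assumptions chosen only to show the general shape of such a workflow.

    # Illustrative sketch only: a generic text-to-image workflow with the
    # open-source Hugging Face diffusers library. The model name, prompts,
    # and number of passes are placeholder assumptions, not our actual setup.
    import torch
    from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model_id = "runwayml/stable-diffusion-v1-5"  # assumed example model

    # Step 1 -- concept development: generate a first draft from a text prompt.
    txt2img = StableDiffusionPipeline.from_pretrained(model_id).to(device)
    draft = txt2img(
        "a diverse group of investors meeting in a bright office, editorial illustration"
    ).images[0]

    # Step 2 -- inpainting would use a dedicated inpainting pipeline plus a mask
    # image marking the region to add or remove; omitted here to keep the sketch short.

    # Step 3 -- iteration: refine the draft with image-to-image passes, adjusting
    # the prompt each time (in practice this can take 10-20 prompts per project).
    img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id).to(device)
    image = draft
    for revised_prompt in [
        "same scene, warmer lighting, consistent brand colour palette",
        "same scene, cleaner composition, less background clutter",
    ]:
        image = img2img(prompt=revised_prompt, image=image, strength=0.6).images[0]

    image.save("concept_final.png")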

Learning to communicate with AI models effectively takes time and effort. Our team continuously trains to use these tools responsibly and to recognize and address bias.

Welcoming Feedback and Criticism

We value input from our readers and the broader community. If you believe we are using a model inappropriately or have suggestions for better models, we encourage you to share your thoughts through our feedback form.

We take all feedback seriously and will investigate any concerns thoroughly, making necessary adjustments to our practices.