The endless morass of this project.

Sep 12, 2016 20:17

A post on a Monday? Strange times, strange times. Or I'm just getting back in the habit of posting.

The project at work continues to be a black hole sucking in the time and energy of everyone involved. Even me! I ended up skipping my lunch break and working an hour overtime today over this thing. Yes, I realize that sounds trivial, BUT I really try to be strict with my hours. I feel that if I'm repeatedly working overtime, then either I'm really inefficient or there is more work to do than is reasonable. Also, I generally prefer not to sit at a desk for eight hours straight, since it's pretty bad for your health.

But there's a presentation or something tomorrow, and the relevant data weren't actually ready to analyze until late last week, sooooo...yeah. As has been typical for this project, things need to be done at the last minute. And I guess there's no one to analyze the data other than me?

This is apparently so ingrained in the project manager's mind that while I was working on the video data, she comes over to ask about a weird thing with the survey data. Basically, whoever made the survey in Qualtrics (the survey software) did a pretty poor job, so the values were all mapped to the wrong labels. So I was like, yeah, they may have been wrong in Qualtrics; a bunch of stuff wasn't coded properly. Of course she's all like, yeah, they just changed the Qualtrics platform. Which is true, but is NOT the cause of the error: the survey was designed wrong. But this has also been a pattern of this project: no matter what goes wrong, she decides it's someone else's fault. Which is perhaps why so many things have been done so poorly on this project. But anyway, so I'm like, this can be fixed by recoding the values, since the labels are intact and therefore we know what each value should be. So she's like, are you busy, and I'm like, yeah, I'm working on the video data (again, eight straight hours of working on this and there's still stuff to do tomorrow morning. I was very busy). So she's like, okay, I'll fix this then.

Here is the thing: recoding was the obvious solution, she already knew about the issues with Qualtrics (or at least I told her about them when we first downloaded all the survey data, so she should know), and recoding the values would take maybe thirty seconds. It took her maybe five minutes just to come over and ask me what the problem was and whether I could fix it for her. Rerunning the relevant analyses would take even less time: just rerun the syntax. So...wouldn't it have made more sense to just do it herself, considering how little time there is to finish everything?
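
(For the curious: the real fix happens in our stats package's syntax, but a recode like that, sketched here in Python/pandas with a made-up column name and value map, is literally just this:)

```python
import pandas as pd

# Toy stand-in for the survey export; real column names and values differ.
df = pd.DataFrame({"coach_support": [1, 2, 5, 4, 2, 3]})

# The labels are intact, so we know what each value SHOULD be.
# Hypothetical case: the 1-5 scale came through reversed.
fix = {1: 5, 2: 4, 3: 3, 4: 2, 5: 1}
df["coach_support"] = df["coach_support"].map(fix)

print(df["coach_support"].tolist())  # [5, 4, 1, 2, 4, 3]
```

Thirty seconds, as advertised.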

But yeah, I spent all day analyzing the video data. Well, most of it was spent preparing to analyze the video data, since I needed to create a ton of variables: the project co-manager told me there wasn't anything in particular that needed to be analyzed, just to look at everything. So now the data file has 300-odd variables. For reference, you should basically never just poke around analyzing everything, because it leads to spurious conclusions.
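
To put a number on 'spurious': with 300 variables and the usual .05 threshold, you'd expect around fifteen of them to look 'significant' by pure chance even if nothing real is going on at all. A quick illustration (Python, made-up noise data, group sizes matching ours):

```python
import numpy as np
from scipy import stats

# 300 pure-noise variables, two groups of 7 and 12, with NO true
# differences anywhere. Count how many come out "significant" anyway.
rng = np.random.default_rng(0)
hits = sum(
    stats.ttest_ind(rng.normal(size=7), rng.normal(size=12)).pvalue < 0.05
    for _ in range(300)
)

print(f"{hits} of 300 noise variables look 'significant'")
# On average about 0.05 * 300 = 15 false positives.
```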

And beyond that, we have a sample size of 19, with 7 in one group and 12 in the other. For reference, that is MUCH TOO SMALL for the analyses we're running, and we should not draw any conclusions from it. These random 19 people do not generalize to Little League coaches as a whole. And yet, the project co-manager tells me to just write up something describing the 'patterns' we're 'finding', a.k.a. mean differences we noticed by just looking at all the means. Which are not actually 'findings' in any meaningful sense, and do not tell us anything. And I'm all like, the mean differences aren't really that large, so they may not really be a pattern, and she's like, but you can see an overall pattern, right? And it's like, no, we cannot, because these differences are likely due to random variation. Like, obviously all the means won't be identical, so a bit higher or lower here and there doesn't mean much.
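
For anyone who wants the actual math on MUCH TOO SMALL: a standard power calculation (sketched in Python/statsmodels just to show the scale of the problem) says that with groups of 7 and 12, only an enormous effect would be reliably detectable at all:

```python
from statsmodels.stats.power import TTestIndPower

# Smallest standardized effect a two-sample t-test could detect with
# 80% power at alpha = .05, given group sizes of 7 and 12.
d = TTestIndPower().solve_power(effect_size=None, nobs1=7,
                                ratio=12 / 7, alpha=0.05, power=0.8)
print(f"minimum detectable effect size: d = {d:.2f}")  # roughly d = 1.4
```

An effect of d = 1.4 is gigantic by social science standards; anything smaller than that is basically indistinguishable from noise with these group sizes.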

And of course she and the project manager both KNOW that; they have PhDs in this field, for goodness' sake. But I suppose the point is that we'll look bad if we say we can't draw any meaningful conclusions, so we have to have something to say about these data, because, as the co-manager puts it, 'these are the data we have'. I really feel like this is just not ethical. Yeah, we aren't technically SAYING we can draw conclusions from these data, but basically we are listing a bunch of conclusions and then noting in small print that actually, we can't draw conclusions. In the full knowledge that the partner organization will see this presentation and think that these conclusions are valid. We are basically just misleading them to save face at this point (we would have quite a bit more data if all of the Spring video data hadn't been lost due to the project team's fuckup with the hard drive).

We COULD do some stuff with the video data, but it would really have to be some kind of qualitative analysis that doesn't depend on sample size. The analyses we have ACTUALLY done all demand a larger sample. But qual analysis takes time, and everything needs to be analyzed quickly, so...yeah. We're looking vaguely at the mean levels of various things, deciding that there's kind of sort of a pattern, and reporting that as a 'description' of the data. Even though we all know full well that, really, we can't conclude anything from a quantitative analysis of these data.

If we had more time we could do some interesting qualitative things, imo, like looking at all the positive-encouragement codes and analyzing the nature of the coaches' encouragement (what characteristics does it have? Is it different during different parts of the game? Are different styles of encouraging related to various characteristics that we measured?), but a) that would take time and b) that isn't really what the partner organization cared about, although we might be interested in it. I get the impression that tomorrow's presentation is almost like...justifying all the summer data collection. Which, really, we should not have done; we didn't get enough data to make it worth it. For the Fall, the team finally put their foot down and said the partner org has to do the recruiting as they originally agreed to (which is why we are not collecting Fall data; they didn't do any recruitment), and we should have done that way back before the Summer. Although now, apparently, we're aiming to collect data from basketball coaches in the winter...which...won't really combine with the baseball data...so I don't even know.

I don't actually mind the essentially pointless nature of the analyses I have to do...but I don't like doing work that feels really unethical. Plus it's like...if people are just misusing an analysis I've done, that is a bummer, but it's much worse to have to write up these things myself. But, that's what's expected right now...I guess just admitting the problems isn't on the table.

In other news, I'm still running That One Analysis. Considering the conference is in like two weeks, I may have to use an alternate version of it just to demonstrate the data cleaning technique for this poster and then carry on with the analysis after I get back from the conference (for purposes of the paper we're hoping to write...assuming I can ever reproduce the analysis...).

And that's it...it's only Monday, after all. Nothin' but complaints about work to see here.

job
